[zfs-discuss] Best practice for boot partition layout in ZFS

2011-04-06 Thread Arjun YK
Hi,

I am trying to use ZFS for boot, and I'm a bit confused about how boot
partitions like /var should be laid out.

With old UFS, we create /var as a separate filesystem to avoid various logs
filling up the / filesystem.

With ZFS, during the OS install there is an option to put /var on a
separate dataset, but no option is given to set a quota. Maybe others set
the quota manually.

So, I am trying to understand the best practice for /var in ZFS. Is it
exactly the same as in UFS, or is there anything different?

Could someone share some thoughts?


Thanks
Arjun


[zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Joe Auty


  
  
Hello,

I'm debating an OS change and also thinking about my options for data
migration to my next server, whether it is on new or the same hardware.

Migrating to a new machine I understand is a simple matter of ZFS
send/receive, but reformatting the existing drives to host my existing data
is an area I'd like to learn a little more about. In the past I've asked
about this and was told that it is possible to do a send/receive to
accommodate this, and IIRC this doesn't have to be to a ZFS server with the
same number of physical drives?

How about getting a little more crazy... What if this entire server
temporarily hosting this data was a VM guest running ZFS? I don't foresee
this being a problem either, but with so much at stake I thought I would
double check :) When I say temporary I mean simply using this machine as a
place to store the data long enough to wipe the original server, install
the new OS to the original server, and restore the data using this VM as
the data source.

Also, more generally, is ZFS send/receive mature enough that when you do
data migrations you don't stress about this? Piece of cake? The difficulty
of this whole undertaking will influence my decision and the whole timing
of all of this.

I'm also thinking that a ZFS VM guest might be a nice way to maintain a
remote backup of this data, if I can install the VM image on a
drive/partition large enough to house my data. This seems like it would be
a little less taxing than rsync cronjobs?



-- 
Joe Auty, NetMusician
NetMusician helps musicians, bands and artists create beautiful,
professional, custom designed, career-essential websites that are easy
to maintain and to integrate with popular social networks.
www.netmusician.org
j...@netmusician.org



Re: [zfs-discuss] Best practice for boot partition layout in ZFS

2011-04-06 Thread Lori Alt

 On 04/ 6/11 07:59 AM, Arjun YK wrote:

Hi,

I am trying to use ZFS for boot, and I'm a bit confused about how boot
partitions like /var should be laid out.


With old UFS, we create /var as a separate filesystem to avoid various
logs filling up the / filesystem.


I believe that creating /var as a separate file system was a common 
practice, but not a universal one.  It really depended on the 
environment and local requirements.




With ZFS, during the OS install there is an option to put /var on a
separate dataset, but no option is given to set a quota. Maybe others set
the quota manually.


Having a separate /var dataset gives you the option of setting a quota 
on it later.  That's why we provided the option.  It was a way of 
enabling administrators to get the same effect as having a separate /var 
slice did with UFS.  Administrators can choose to use it or not, 
depending on local requirements.
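
For illustration only (the dataset name below is an example; the actual
path depends on the boot environment), a quota can be added after the
install with something like:

  # zfs set quota=8g rpool/ROOT/s10_be/var
  # zfs get quota rpool/ROOT/s10_be/var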




So, I am trying to understand the best practice for /var in ZFS. Is it
exactly the same as in UFS, or is there anything different?


I'm not sure there's a defined best practice.  Maybe someone else can 
answer that question.  My guess is that in environments where, before, a 
separate ufs /var slice was used, a separate zfs /var dataset with a 
quota might now be appropriate.


Lori




Could someone share some thoughts?


Thanks
Arjun




Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread David Dyer-Bennet

On Tue, April 5, 2011 14:38, Joe Auty wrote:

 Migrating to a new machine I understand is a simple matter of ZFS
 send/receive, but reformatting the existing drives to host my existing
 data is an area I'd like to learn a little more about. In the past I've
 asked about this and was told that it is possible to do a send/receive
 to accommodate this, and IIRC this doesn't have to be to a ZFS server
 with the same number of physical drives?

The internal structure of the pool (how many vdevs, and what kind) is
irrelevant to zfs send / receive.  So I routinely send from a pool of 3
mirrored pairs of disks to a pool of one large drive, for example (it's
how I do my backups).   I've also gone the other way once :-( (It's good
to have backups).

I'm not 100.00% sure I understand what you're asking; does that answer it?

Mind you, this can be slow.  On my little server (under 1TB filled) the
full backup takes about 7 hours (largely because the single large external
drive is a USB drive; the bottleneck is the USB).  Luckily an incremental
backup is rather faster.
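
As a sketch, a full replication of that sort (pool names here are
hypothetical: tank is the source, backup is the single-drive pool) looks
something like:

  # zfs snapshot -r tank@full-20110406
  # zfs send -R tank@full-20110406 | zfs receive -Fd backup

The -R flag sends the whole dataset tree with its snapshots and properties,
and the receiving pool's vdev layout never enters into it.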

 How about getting a little more crazy... What if this entire server
 temporarily hosting this data was a VM guest running ZFS? I don't
 foresee this being a problem either, but with so much at stake I thought
 I would double check :) When I say temporary I mean simply using this
 machine as a place to store the data long enough to wipe the original
 server, install the new OS to the original server, and restore the data
 using this VM as the data source.

I haven't run ZFS extensively in VMs (mostly just short-lived small test
setups).  From my limited experience, and what I've heard on the list,
it's solid and reliable, though, which is what you need for that
application.

 Also, more generally, is ZFS send/receive mature enough that when you do
 data migrations you don't stress about this? Piece of cake? The
 difficulty of this whole undertaking will influence my decision and the
 whole timing of all of this.

A full send / receive has been reliable for a long time.  With a real
(large) data set, it's often a long run.  It's often done over a network,
and any network outage can break the run, and at that point you start
over, which can be annoying.  If the servers themselves can't stay up for
10 or 20 hours you presumably aren't ready to put them into production
anyway :-).

 I'm also thinking that a ZFS VM guest might be a nice way to maintain a
 remote backup of this data, if I can install the VM image on a
 drive/partition large enough to house my data. This seems like it would
 be a little less taxing than rsync cronjobs?

I'm a big fan of rsync, in cronjobs or wherever.  What it won't do is
properly preserve ZFS ACLs, and ZFS snapshots, though.  I moved from using
rsync to using zfs send/receive for my backup scheme at home, and had
considerable trouble getting that all working (using incremental
send/receive when there are dozens of snapshots new since last time).  But
I did eventually get up to recent enough code that it's working reliably
now.
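
For the incremental case, something like the following (names again
hypothetical) sends every snapshot created since the last common one in a
single run:

  # zfs snapshot -r tank@backup-20110406
  # zfs send -R -I tank@backup-20110405 tank@backup-20110406 | \
        ssh backuphost zfs receive -Fd backup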

If you can provision big enough data stores for your VM to hold what you
need, that seems a reasonable approach to me, but I haven't tried anything
much like it, so my opinion is, if you're very lucky, maybe worth what you
paid for it.
-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Best practice for boot partition layout in ZFS

2011-04-06 Thread Erik Trimble

On 4/6/2011 7:50 AM, Lori Alt wrote:

On 04/ 6/11 07:59 AM, Arjun YK wrote:

So, I am trying to understand the best practice for /var in ZFS. Is it
exactly the same as in UFS, or is there anything different?

I'm not sure there's a defined best practice.  Maybe someone else can 
answer that question.  My guess is that in environments where, before, a 
separate ufs /var slice was used, a separate zfs /var dataset with a 
quota might now be appropriate.

Lori




Traditionally, the reason for a separate /var was one of two major items:

(a)  /var was writable, and / wasn't - this was typical of diskless or 
minimal local-disk configurations. Modern packaging systems are making 
this kind of configuration increasingly difficult.


(b) /var held a substantial amount of data, which needed to be handled 
separately from /  - mail and news servers are a classic example



For typical machines nowadays, with large root disks, there is very 
little chance of /var suddenly exploding and filling / (the classic 
example of being screwed... wink).  Outside of the above two cases, 
about the only other place I can see that having /var separate is a good 
idea is for certain test machines where you expect frequent memory 
dumps (in /var/crash) - if you have a large amount of RAM, you'll need a 
lot of disk space, so it might be good to limit /var in this case by 
making it a separate dataset.



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] Best practice for boot partition layout in ZFS

2011-04-06 Thread Gary Mills
On Wed, Apr 06, 2011 at 08:08:06AM -0700, Erik Trimble wrote:
Traditionally, the reason for a separate /var was one of two major items:
(a)  /var was writable, and / wasn't - this was typical of diskless or
minimal local-disk configurations.
(b) /var held a substantial amount of data, which needed to be handled
separately from / - mail and news servers are a classic example.

People forget (c), the ability to set different filesystem options on
/var.  You might want to have `setuid=off' for improved security, for
example.
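
For example (the boot-environment name here is made up):

  # zfs set setuid=off rpool/ROOT/zfsBE/var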

-- 
-Gary Mills--Unix Group--Computer and Network Services-


Re: [zfs-discuss] Best practice for boot partition layout in ZFS

2011-04-06 Thread Torrey McMahon


On 4/6/2011 11:08 AM, Erik Trimble wrote:

Traditionally, the reason for a separate /var was one of two major items:

(a)  /var was writable, and / wasn't - this was typical of diskless or 
minimal local-disk configurations.

(b) /var held a substantial amount of data, which needed to be handled 
separately from / - mail and news servers are a classic example.


Some more info a la (b) - the "something filled up the root fs and the 
box crashed" problem was fixed a while ago. It's still a drag cleaning up 
after an errant process that is filling up a file system, but it 
shouldn't crash/panic anymore. However, old habits die hard, especially 
at government sites where the rules require a papal bull to be changed, 
so I think the option was left in to keep folks happy more than for any 
practical reason.


I'm sure someone has a really good reason to keep /var separated, but 
those cases are fewer and farther between than I saw 10 years ago.



Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread David Magda
On Wed, April 6, 2011 10:51, David Dyer-Bennet wrote:

 I'm a big fan of rsync, in cronjobs or wherever.  What it won't do is
 properly preserve ZFS ACLs, and ZFS snapshots, though.  I moved from using
 rsync to using zfs send/receive for my backup scheme at home, and had
 considerable trouble getting that all working (using incremental
 send/receive when there are dozens of snapshots new since last time).  But
 I did eventually get up to recent enough code that it's working reliably
 now.

You may be interested in these scripts:

http://www.freshports.org/sysutils/zfs-replicate/
http://www.freshports.org/sysutils/zxfer/

Not sure how FreeBSD-specific these are, but one was originally written
for (Open)Solaris AFAICT.




Re: [zfs-discuss] Best practice for boot partition layout in ZFS

2011-04-06 Thread David Magda
On Wed, April 6, 2011 11:29, Gary Mills wrote:

 People forget (c), the ability to set different filesystem options on
 /var.  You might want to have `setuid=off' for improved security, for
 example.

Or better yet: exec=off,devices=off. Another handy one could be
compression=on (or even gzip-[1-9]).
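
Something along these lines, again with a made-up dataset name (substitute
gzip-6 or another gzip level for on if you want heavier compression):

  # zfs set exec=off rpool/ROOT/zfsBE/var
  # zfs set devices=off rpool/ROOT/zfsBE/var
  # zfs set compression=on rpool/ROOT/zfsBE/var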



Re: [zfs-discuss] cannot destroy snapshot

2011-04-06 Thread Paul Kraus
On Tue, Apr 5, 2011 at 6:56 PM, Rich Morris rich.mor...@oracle.com wrote:
 On 04/05/11 17:29, Ian Collins wrote:

 If there are clones then zfs destroy should report that.  The error being
 reported is dataset is busy which would be reported if there are user
 holds on the snapshots that can't be deleted.

 Try running zfs holds zpool-01/dataset-01@1299636001

xxx zfs holds zpool-01/dataset-01@1299636001
NAME   TAGTIMESTAMP
zpool-01/dataset-01@1299636001  .send-18440-0  Tue Mar 15 20:00:39 2011
xxx zfs holds zpool-01/dataset-01@1300233615
NAME   TAGTIMESTAMP
zpool-01/dataset-01@1300233615  .send-18440-0  Tue Mar 15 20:00:47 2011
xxx

That is what I was looking for. Looks like when a zfs send got
killed it left a hanging lock (hold) around. I assume these will clear on
the next export/import (not likely, as this is a production zpool) or a
reboot (which will happen eventually, and I can wait), unless there is a
way to force-clear the hold.

Thanks Rich.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
- Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
- Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
- Technical Advisor, RPI Players


Re: [zfs-discuss] cannot destroy snapshot

2011-04-06 Thread Paul Kraus
On Tue, Apr 5, 2011 at 9:26 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:

 This may not apply to you, but in some other unrelated situation it was
 useful...

 Try zdb -d poolname
 In an older version of zpool, under certain conditions, there would
 sometimes be hidden clones listed with a % in the name.  Maybe the % won't
 be there in your case, but maybe you have some other manifestation of the
 hidden clone problem?

I have seen a dataset with a '%' in the name, but that was
during a zfs recv (and if the zfs recv dies, it sometimes hangs
around and has to be destroyed, and the zfs destroy claims to fail
even though it succeeds ;-), but not in this case. The snapshots are
all valid (I just can't destroy two of them); we are snapshotting on a
fairly frequent basis as we are loading data.

Thanks for the suggestion.

xxx zdb -d zpool-01
Dataset mos [META], ID 0, cr_txg 4, 18.7G, 745 objects
Dataset zpool-01/dataset-01@1302019202 [ZPL], ID 140, cr_txg 654658,
38.9G, 990842 objects
Dataset zpool-01/dataset-01@1302051600 [ZPL], ID 158, cr_txg 655776,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1302062401 [ZPL], ID 189, cr_txg 656162,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1301951162 [ZPL], ID 108, cr_txg 652292,
1.02M, 478 objects
Dataset zpool-01/dataset-01@1302087601 [ZPL], ID 254, cr_txg 657065,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1302105601 [ZPL], ID 291, cr_txg 657710,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1302058800 [ZPL], ID 164, cr_txg 656033,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1299636001 [ZPL], ID 48, cr_txg 560375,
1.12T, 28468324 objects
Dataset zpool-01/dataset-01@1302007173 [ZPL], ID 125, cr_txg 654202,
1.09M, 506 objects
Dataset zpool-01/dataset-01@1302055201 [ZPL], ID 161, cr_txg 655905,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1302080401 [ZPL], ID 248, cr_txg 656807,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1302044400 [ZPL], ID 152, cr_txg 655518,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1301950939 [ZPL], ID 106, cr_txg 652280,
1.02M, 478 objects
Dataset zpool-01/dataset-01@1302015602 [ZPL], ID 137, cr_txg 654530,
10.3G, 175879 objects
Dataset zpool-01/dataset-01@1302030001 [ZPL], ID 143, cr_txg 655029,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1300233615 [ZPL], ID 79, cr_txg 594951,
4.48T, 99259515 objects
Dataset zpool-01/dataset-01@1302094801 [ZPL], ID 282, cr_txg 657323,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1302066001 [ZPL], ID 214, cr_txg 656291,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1302006933 [ZPL], ID 120, cr_txg 654181,
1.09M, 506 objects
Dataset zpool-01/dataset-01@1302098401 [ZPL], ID 285, cr_txg 657452,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1302007755 [ZPL], ID 131, cr_txg 654240,
1.09M, 506 objects
Dataset zpool-01/dataset-01@1302048001 [ZPL], ID 155, cr_txg 655647,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1302005414 [ZPL], ID 116, cr_txg 654119,
1.09M, 506 objects
Dataset zpool-01/dataset-01@1302007469 [ZPL], ID 128, cr_txg 654221,
1.09M, 506 objects
Dataset zpool-01/dataset-01@1302084001 [ZPL], ID 251, cr_txg 656936,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1302076801 [ZPL], ID 245, cr_txg 656678,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1302069601 [ZPL], ID 217, cr_txg 656420,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1302073201 [ZPL], ID 242, cr_txg 656549,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1302102001 [ZPL], ID 288, cr_txg 657581,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01@1302005162 [ZPL], ID 112, cr_txg 654101,
1.09M, 506 objects
Dataset zpool-01/dataset-01@1302012001 [ZPL], ID 134, cr_txg 654391,
1.18G, 63312 objects
Dataset zpool-01/dataset-01@1302004805 [ZPL], ID 110, cr_txg 654085,
1.09M, 506 objects
Dataset zpool-01/dataset-01@1302006769 [ZPL], ID 118, cr_txg 654171,
1.09M, 506 objects
Dataset zpool-01/dataset-01@1302091201 [ZPL], ID 257, cr_txg 657194,
71.1G, 1845553 objects
Dataset zpool-01/dataset-01 [ZPL], ID 84, cr_txg 439406, 71.1G, 1845553 objects
Dataset zpool-01 [ZPL], ID 16, cr_txg 1, 39.3K, 5 objects
xxx

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
- Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
- Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
- Technical Advisor, RPI Players


Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Paul Kraus
On Wed, Apr 6, 2011 at 10:51 AM, David Dyer-Bennet d...@dd-b.net wrote:

 On Tue, April 5, 2011 14:38, Joe Auty wrote:

 Also, more generally, is ZFS send/receive mature enough that when you do
 data migrations you don't stress about this? Piece of cake? The
 difficulty of this whole undertaking will influence my decision and the
 whole timing of all of this.

 A full send / receive has been reliable for a long time.  With a real
 (large) data set, it's often a long run.  It's often done over a network,
 and any network outage can break the run, and at that point you start
 over, which can be annoying.  If the servers themselves can't stay up for
 10 or 20 hours you presumably aren't ready to put them into production
 anyway :-).

At my employer we have about 20TB of data in one city and a zfs-replicated
copy of it in another city. The data is spread out over 15 pools and over
200 datasets. The initial full replication of the larger datasets took
days; the largest (3 TB) took close to two weeks. The incremental
send/recv sessions are much quicker, depending on how much data has
changed, but we run the replication script every 4 hours and it usually
completes before the next scheduled run. Once we got past a few bugs in
both my script and the older zfs code (we are at zpool 22 and zfs 4 right
now; we started all this at zpool 10), the replications have been flawless.
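
The scheduling side of that is plain cron; the script path below is
hypothetical, and Solaris cron wants the hours listed explicitly:

  0 0,4,8,12,16,20 * * * /opt/admin/bin/zfs-replicate.sh >/var/log/zfs-replicate.log 2>&1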

 I'm also thinking that a ZFS VM guest might be a nice way to maintain a
 remote backup of this data, if I can install the VM image on a
 drive/partition large enough to house my data. This seems like it would
 be a little less taxing than rsync cronjobs?

 I'm a big fan of rsync, in cronjobs or wherever.  What it won't do is
 properly preserve ZFS ACLs, and ZFS snapshots, though.  I moved from using
 rsync to using zfs send/receive for my backup scheme at home, and had
 considerable trouble getting that all working (using incremental
 send/receive when there are dozens of snapshots new since last time).  But
 I did eventually get up to recent enough code that it's working reliably
 now.

We went with zfs send/recv over rsync for two big reasons, an
incremental zfs send is much, much faster than an rsync if you have
lots of files (our 20TB of data consists of 200 million files), and we
are leveraging zfs ACLs and need them preserved on the copy.

I have not tried zfs on a VM guest.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
- Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
- Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
- Technical Advisor, RPI Players


Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Brandon High
On Tue, Apr 5, 2011 at 12:38 PM, Joe Auty j...@netmusician.org wrote:

 How about getting a little more crazy... What if this entire server
 temporarily hosting this data was a VM guest running ZFS? I don't foresee
 this being a problem either, but with so


The only thing to watch out for is to make sure that the receiving datasets
aren't a higher version than the zfs version that you'll be using on the
replacement server. Because you can't downgrade a dataset, using snv_151a
and planning to send to Nexenta as a final step will trip you up unless you
explicitly create them with a lower version.
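
A sketch of what Brandon describes, assuming the version property is
accepted at dataset creation time (names and version numbers are only
examples): check the version in use on the source, then create the staging
dataset at that version or lower.

  # zfs get version tank/data
  # zfs create -o version=4 vmpool/staging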

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Paul Kraus
On Wed, Apr 6, 2011 at 1:14 PM, Brandon High bh...@freaks.com wrote:

 The only thing to watch out for is to make sure that the receiving datasets
 aren't a higher version than the zfs version that you'll be using on the
 replacement server. Because you can't downgrade a dataset, using snv_151a
 and planning to send to Nexenta as a final step will trip you up unless you
 explicitly create them with a lower version.

I thought I saw that with zpool 10 (or was it 15) the zfs send
format had been committed and you *could* send/recv between different
versions of zpool/zfs. From the Solaris 10U9 (zpool 22) manpage for zfs:

The format of the stream is committed. You will be able
to receive your streams on future versions of ZFS.

-or- does this just mean upward compatibility? In other words, I can
send from pool 15 to pool 22 but not the other way around.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
- Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
- Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
- Technical Advisor, RPI Players


Re: [zfs-discuss] cannot destroy snapshot

2011-04-06 Thread Rich Morris

On 04/06/11 12:43, Paul Kraus wrote:

xxx zfs holds zpool-01/dataset-01@1299636001
NAME   TAGTIMESTAMP
zpool-01/dataset-01@1299636001  .send-18440-0  Tue Mar 15 20:00:39 2011
xxx zfs holds zpool-01/dataset-01@1300233615
NAME   TAGTIMESTAMP
zpool-01/dataset-01@1300233615  .send-18440-0  Tue Mar 15 20:00:47 2011
xxx

That is what I was looking for. Looks like when a zfs send got
killed it left a hanging lock (hold) around. I assume these will clear on
the next export/import (not likely, as this is a production zpool) or a
reboot (which will happen eventually, and I can wait), unless there is a
way to force-clear the hold.


The user holds won't be released by an export/import or a reboot.

zfs get defer_destroy snapname will show whether this snapshot is marked 
for deferred destroy, and zfs release .send-18440-0 snapname will clear 
that hold.  If the snapshot is marked for deferred destroy, then the 
release of the last tag will also destroy it.
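
Using the snapshot names from Paul's output above, that would be something
like:

  xxx zfs get defer_destroy zpool-01/dataset-01@1299636001
  xxx zfs release .send-18440-0 zpool-01/dataset-01@1299636001
  xxx zfs release .send-18440-0 zpool-01/dataset-01@1300233615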

-- Rich



Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Lori Alt

 On 04/ 6/11 11:42 AM, Paul Kraus wrote:

On Wed, Apr 6, 2011 at 1:14 PM, Brandon Highbh...@freaks.com  wrote:


The only thing to watch out for is to make sure that the receiving datasets
aren't a higher version than the zfs version that you'll be using on the
replacement server. Because you can't downgrade a dataset, using snv_151a
and planning to send to Nexenta as a final step will trip you up unless you
explicitly create them with a lower version.

 I thought I saw that with zpool 10 (or was it 15) the zfs send
format had been committed and you *could* send/recv between different
version of zpool/zfs. From Solaris 10U9 with zpool 22 manpage for zfs:

The format of the stream is committed. You will be  able
to receive your streams on future versions of ZFS.

correct.


-or- does this just mean upward compatibility? In other words, I can
send from pool 15 to pool 22 but not the other way around.

It does mean upward compatibility only, but I believe that it's the 
dataset version that matters, not the pool version, and the dataset 
version has not changed as often as the pool version:


root@v40z-brm-02:/home/lalt/ztest# zfs get version rpool/export/home
NAME   PROPERTY  VALUESOURCE
rpool/export/home  version   5-
root@v40z-brm-02:/home/lalt/ztest# zpool get version rpool
NAME   PROPERTY  VALUESOURCE
rpool  version   32   default

(someone still on the zfs team please correct me if that's wrong.)

Lori







Re: [zfs-discuss] Best practice for boot partition layout in ZFS

2011-04-06 Thread Linder, Doug
Torrey McMahon wrote:

 I'm sure someone has a really good reason to keep /var separated, but those
 cases are fewer and farther between than I saw 10 years ago.

I agree that the causes and repercussions are less now than they were a long 
time ago.  But /var still can and sometimes does fill up, and it is kind of 
handy to have quotas and separate filesystem settings and so on.

I guess there's no overall crying reason to use a separate /var, but there's 
always this argument: it can't hurt anything.  Especially with ZFS.  In the old 
days, if /var was a separate partition, then you risked making it too big or too 
small.  But given the flexibility of ZFS, I think the question really is: is 
there any reason *not* to put /var on a separate ZFS filesystem?

Doug Linder



Re: [zfs-discuss] Best practice for boot partition layout in ZFS

2011-04-06 Thread Richard Elling
On Apr 6, 2011, at 12:01 PM, Linder, Doug wrote:

 I guess there's no overall crying reason to use a separate /var, but there's 
 always this argument: it can't hurt anything.  Especially with ZFS.  But given 
 the flexibility of ZFS, I think the question really is: is there any reason 
 *not* to put /var on a separate ZFS filesystem?

Yes. For backup/restore, the unit of management is the file system. More file
systems result in more complicated backup/restore, which increases RTO and
costs. This was always the Achilles' heel of a separate /var.
 -- richard



Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Brandon High
On Wed, Apr 6, 2011 at 10:42 AM, Paul Kraus pk1...@gmail.com wrote:
    I thought I saw that with zpool 10 (or was it 15) the zfs send
 format had been committed and you *could* send/recv between different
 version of zpool/zfs. From Solaris 10U9 with zpool 22 manpage for zfs:

There is still a problem if the dataset version is too high. I
*believe* that a 'zfs send -R' should send the zfs version, and that
zfs receive will create any new datasets using that version. (I have a
received dataset here that's zfs v 4, whereas everything else in the
pool is v5.) As long as you don't do a zfs upgrade after that point,
you should be fine.

It's probably a good idea to check that the received versions are the
same as the source before doing a destroy though. ;-)
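
A quick way to compare (pool names hypothetical):

  # zfs get -rH -o value version tank | sort -u
  # zfs get -rH -o value version vmpool | sort -u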

One other thing I forgot to mention in my last mail: if you're
receiving into a VM, make sure that the VM can manage redundancy on its
zfs storage, and not just multiple vdsks on the same host disk / LUN.
Either give it access to the raw devices, or use iSCSI, or create your
vdsks on different LUNs and raidz them, etc.
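
For instance, if the guest is handed three separate LUNs (device names
below are invented):

  # zpool create vmtank raidz c2t0d0 c2t1d0 c2t2d0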

-B

-- 
Brandon High : bh...@freaks.com