Re: [zfs-discuss] openindiana-1 filesystem, time-slider, and snapshots

2012-10-16 Thread Jason
Ahh.. the illumos lists change the from address (or set reply-to), so that
a plain reply goes back to the list; this one apparently doesn't.  Sorry
about that :)

But yeah, when you update you probably want to snap/backup your current
BE.  It won't hurt anything to snap/backup the old ones, just that they'll
contain the non-updated versions of everything.

On Tue, Oct 16, 2012 at 2:48 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) <
opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:

> > From: jason.brian.k...@gmail.com [mailto:jason.brian.k...@gmail.com] On
> > Behalf Of Jason
> >
> > When you update your install (i.e. pkg image-update) unless you explicitly
> > specify a BE name, it names the BEs as 'openindiana-NN'.  The way that
> > pkg(5) works is that it snapshot+clones your current BE (i.e.
> > rpool/ROOT/openindiana if that is your current BE), then applies the
> updates
> > to the clone.  You can then use beadm to activate the new one, or switch
> back,
> > etc.  It is unrelated to time slider (other than both will create zfs
> snapshots).
> >
> > That you have openindiana-1 suggests you've run pkg image-update at least
> > once (either via cmdline or GUI).  beadm should tell you which one is
> active
> > (i.e. you're running on); you can also run df -h / or mount and see
> which
> > fs is actually mounted as /
>
> (I just noticed this reply was sent to me off-list.  Do you mind resending
> it on-list, and I'll send this reply again on-list?)
>
> That agrees with what I originally thought.  So here is the confusion:
>
> zfs list -t snapshot | grep '^rpool/ROOT/openindiana@' | grep daily
> rpool/ROOT/openindiana@zfs-auto-snap_daily-2012-10-16-12h04        0  -  3.25G  -
>
> Every day, the above changes.  And every day, a new entry appears below:
>
> zfs list -t snapshot | grep '^rpool/ROOT/openindiana-1@' | grep daily
> rpool/ROOT/openindiana-1@zfs-auto-snap_daily-2012-10-12-12h04    307K  -  3.36G  -
> rpool/ROOT/openindiana-1@zfs-auto-snap_daily-2012-10-13-12h04    563K  -  3.36G  -
> rpool/ROOT/openindiana-1@zfs-auto-snap_daily-2012-10-14-12h04    443K  -  3.37G  -
> rpool/ROOT/openindiana-1@zfs-auto-snap_daily-2012-10-15-12h04   1.59M  -  3.37G  -
> rpool/ROOT/openindiana-1@zfs-auto-snap_daily-2012-10-16-12h04    324K  -  3.38G  -
>
> I ran "beadm list" as suggested, and it confirmed what I've been
> suspecting since starting this thread: the current BE is openindiana-1.
>
> It would seem I simply need to be aware that I should back up whatever
> beadm marks as "NR", because a new filesystem was created and became the
> place where files are actually being updated.  It is not safe for me to
> simply back up "openindiana" and keep it that way moving forward.  Every
> time I "pkg update", I need to remember to update my backup scripts
> accordingly, to back up the new BE instead of the old one.
>
> Sound about right?
>
>
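The backup-script adjustment discussed above could be automated by discovering the "NR" BE at run time rather than hard-coding it. A sketch, assuming the semicolon-delimited `beadm list -H` output used on illumos (field positions may differ between releases), demonstrated with canned output so it runs without a pool:

```shell
#!/bin/sh
# Print the name of the BE whose active flags contain both
# N (booted now) and R (active on reboot).  Assumes semicolon-
# delimited `beadm list -H` fields: name;uuid;active;mountpoint;...
active_be() {
    awk -F';' '$3 ~ /N/ && $3 ~ /R/ { print $1 }'
}

# In a real script you would pipe the live command:
#   BE=$(beadm list -H | active_be)
#   zfs snapshot rpool/ROOT/"$BE"@backup-$(date +%Y-%m-%d)

# Demonstration with canned output:
BE=$(printf '%s\n' \
    'openindiana;abc-123;;-;3.20G;static;2012-01-01 10:00' \
    'openindiana-1;def-456;NR;/;3.40G;static;2012-10-12 12:00' \
    | active_be)
echo "$BE"
```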
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] openindiana-1 filesystem, time-slider, and snapshots

2012-10-16 Thread Darren J Moffat

On 10/16/12 14:54, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

Can anyone explain to me what the openindiana-1 filesystem is all
about?  I thought it was the "backup" copy of the openindiana filesystem,
when you apply OS updates, but that doesn't seem to be the case...

I have time-slider enabled for rpool/ROOT/openindiana.  It has a daily
snapshot (amongst others).  But every day when the new daily snap is
taken, the old daily snap rotates into the rpool/ROOT/openindiana-1
filesystem.  This is messing up my cron-scheduled "zfs send" script -
which detects that the rpool/ROOT/openindiana filesystem no longer has
the old daily snapshot, and therefore has no snapshot in common with the
receiving system, and therefore sends a new full backup every night.

To make matters more confusing, when I run "mount" and when I zfs get
all | grep -i mount, I see / on rpool/ROOT/openindiana-1


It is a new boot environment; see beadm(1M).  You must have run some
'pkg update' or 'pkg install' operation that created a new BE.


It would seem, I shouldn't be backing up openindiana, but instead,
backup openindiana-1?  I would have sworn, out-of-the-box, there was no
openindiana-1.  Am I simply wrong?


Initially there wouldn't have been.

Are you doing the zfs send on your own, or letting time-slider do it for
you?


--
Darren J Moffat


[zfs-discuss] openindiana-1 filesystem, time-slider, and snapshots

2012-10-16 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
Can anyone explain to me what the openindiana-1 filesystem is all about?  I 
thought it was the "backup" copy of the openindiana filesystem, when you apply 
OS updates, but that doesn't seem to be the case...

I have time-slider enabled for rpool/ROOT/openindiana.  It has a daily snapshot 
(amongst others).  But every day when the new daily snap is taken, the old 
daily snap rotates into the rpool/ROOT/openindiana-1 filesystem.  This is 
messing up my cron-scheduled "zfs send" script - which detects that the 
rpool/ROOT/openindiana filesystem no longer has the old daily snapshot, and 
therefore has no snapshot in common with the receiving system, and therefore 
sends a new full backup every night.
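One possible way to harden such a script (a sketch, not the poster's actual script): discover the dataset mounted at / (first column of `df -h /`), then pick the newest snapshot both sides still share and send incrementally, falling back to a full send only when nothing is common. The common-snapshot selection step, shown here with canned snapshot lists in place of live `zfs list` output:

```shell
#!/bin/bash
# Canned snapshot-name lists (the part after '@'); in a real script
# these would come from `zfs list -t snapshot` locally and over ssh.
local_snaps='zfs-auto-snap_daily-2012-10-15-12h04
zfs-auto-snap_daily-2012-10-16-12h04'
remote_snaps='zfs-auto-snap_daily-2012-10-14-12h04
zfs-auto-snap_daily-2012-10-15-12h04'

# Newest snapshot present on both sides; these auto-snap names sort
# chronologically, and comm(1) needs sorted input.
common=$(comm -12 <(sort <<<"$local_snaps") <(sort <<<"$remote_snaps") | tail -n 1)
echo "$common"

# With a live pool, a non-empty $common would allow an incremental send:
#   zfs send -i @"$common" rpool/ROOT/openindiana-1@<latest> | ssh backuphost zfs recv ...
```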

To make matters more confusing, when I run "mount" and when I zfs get all | 
grep -i mount, I see / on rpool/ROOT/openindiana-1

It would seem, I shouldn't be backing up openindiana, but instead, backup
openindiana-1?  I would have sworn, out-of-the-box, there was no
openindiana-1.  Am I simply wrong?

My expectation is that rpool/ROOT/openindiana should have lots of snaps 
available...  3 frequent: one every 15 mins, 23 hourly: one every hour, 6 
daily: one every day, 4 weekly: one every 7 days, etc.

I checked to ensure auto-snapshot service is enabled.  I checked svccfg to 
ensure I understood the correct interval, keep, and period (as described above.)

I see the expected behavior (as described above) on
rpool/export/home/eharvey...  But the behavior is different on
rpool/ROOT/openindiana, even though, as far as I can tell, both have the
same setting: simply com.sun:auto-snapshot=true
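One quick way to compare that setting across the datasets is to filter `zfs get -H` output; the canned lines below just mirror what the thread reports (a live system would pipe the real command instead):

```shell
#!/bin/bash
# Canned output in `zfs get -H com.sun:auto-snapshot` form
# (name, property, value, source, tab-separated).  Live version:
#   zfs get -H com.sun:auto-snapshot rpool/ROOT/openindiana \
#       rpool/ROOT/openindiana-1 rpool/export/home/eharvey
out=$(printf '%s\t%s\t%s\t%s\n' \
  rpool/ROOT/openindiana    com.sun:auto-snapshot true local \
  rpool/ROOT/openindiana-1  com.sun:auto-snapshot true local \
  rpool/export/home/eharvey com.sun:auto-snapshot true local)

# Datasets the auto-snapshot service should be snapshotting:
enabled=$(echo "$out" | awk -F'\t' '$3 == "true" { print $1 }')
echo "$enabled"
```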

One more comment - I recall, when I first configured time-slider, there was
a threshold, by default 80% of pool capacity used, before it automatically
bumps off old snapshots (or stops taking new snaps; I'm not sure which).  I
don't see that setting anywhere I look, using svccfg or zfs get.

My pools are pretty much empty right now.  Nowhere near the 80% limit.