Re: svn commit: r247485 - in stable/9: crypto/openssh crypto/openssh/openbsd-compat secure/lib/libssh secure/usr.sbin/sshd

2013-03-02 Thread Mike Tancsa
On 3/1/2013 10:06 PM, Mike Tancsa wrote:
 On 3/1/2013 3:34 PM, Dag-Erling Smørgrav wrote:
 Mike Tancsa m...@sentex.net writes:
 Dag-Erling Smørgrav d...@des.no writes:
 Are you sure this was due to the OpenSSH update, and not the OpenSSL
 update a few days ago?  Can you try to roll back to r247484?
 I didn't think OpenSSL got updated on RELENG_9?

 Ah, you're right.  There is an OpenSSL commit immediately before my
 OpenSSH commit in src/secure, but it's from last July :)

 Can you try to connect against each version in turn while running
 tcpdump or wireshark and show me the pre-kex handshake and proposal
 exchange (basically, everything that's transmitted in cleartext) in both
 cases?
 
 The pcaps and basic wireshark output at
 
 http://tancsa.com/openssh/

This PR looks to be related

http://lists.freebsd.org/pipermail/freebsd-bugs/2012-September/050139.html

---Mike

-- 
---
Mike Tancsa, tel +1 519 651 3400
Sentex Communications, m...@sentex.net
Providing Internet services since 1994 www.sentex.net
Cambridge, Ontario Canada   http://www.tancsa.com/

Re: Musings on ZFS Backup strategies

2013-03-02 Thread Ronald Klop
On Fri, 01 Mar 2013 21:34:39 +0100, Daniel Eischen deisc...@freebsd.org  
wrote:



On Fri, 1 Mar 2013, Ben Morrow wrote:


Quoth Daniel Eischen deisc...@freebsd.org:


Yes, we still use a couple of DLT autoloaders and have nightly
incrementals and weekly fulls.  This is the problem I have with
converting to ZFS.  Our typical recovery is when a user says
they need a directory or set of files from a week or two ago.
Using dump from tape, I can easily extract *just* the necessary
files.  I don't need a second system to restore to, so that
I can then extract the file.


As Karl said originally, you can do that with snapshots without having
to go to your backups at all. With the right arrangements (symlinks to
the .zfs/snapshot/* directories, or just setting the snapdir property to
'visible') you can make it so users can do this sort of restore
themselves without having to go through you.


It wasn't clear that snapshots were traversable as a normal
directory structure.  I was thinking it was just a blob
that you had to roll back to in order to get anything out
of it.


That is the main benefit of snapshots. :-) You can also very easily diff  
files between them.
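For example, with the snapdir property made visible a user can diff or copy a
file straight out of a snapshot themselves. A minimal sketch (the user and file
names are made up; the dataset, mountpoint and snapshot names follow the
listing further down):

  zfs set snapdir=visible tank/home
  diff /home/.zfs/snapshot/auto-2013-02-17_13.00.daily/alice/report.txt \
       /home/alice/report.txt
  cp /home/.zfs/snapshot/auto-2013-02-17_13.00.daily/alice/report.txt /home/alice/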

Mostly, a lot of data is static, so keeping snapshots does not cost much.
There are a lot of scripts online and in ports that implement a nice retention
policy, e.g. 7 daily snapshots, 8 weekly, 12 monthly, 2 yearly. See
below for (an incomplete list of) the snapshots I keep of my home directory at home.



Under our current scheme, we would remove snapshots
after the next (weekly) full zfs send (nee dump), so
it wouldn't help unless we kept snapshots around a
lot longer.


Why not?


Am I correct in assuming that one could:

   # zfs send -R snapshot | dd obs=10240 of=/dev/rst0

to archive it to tape instead of another [system:]drive?


Yes, you are correct. The zfs send manual page says: 'The format of
the stream is committed. You will be able to receive your streams on
future versions of ZFS.'
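For completeness, reading it back from the tape would be roughly the reverse; a
sketch only, assuming the block size matches what was written and using a
hypothetical target pool name:

  dd if=/dev/rst0 ibs=10240 | zfs receive -vFd tank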



Ronald.



NAME                                      USED  AVAIL  REFER  MOUNTPOINT
tank/home                                 115G  65.6G  53.6G  /home
tank/home@auto-2011-10-25_19.00.yearly   16.3G      -  56.8G  -
tank/home@auto-2012-06-06_22.00.yearly   5.55G      -  53.3G  -
tank/home@auto-2012-09-02_20.00.monthly  2.61G      -  49.3G  -
tank/home@auto-2012-10-15_06.00.monthly  2.22G      -  49.9G  -
tank/home@auto-2012-11-26_13.00.monthly  2.47G      -  50.2G  -
tank/home@auto-2013-01-07_13.00.monthly  2.56G      -  51.5G  -
tank/home@auto-2013-01-21_13.00.weekly   1.06G      -  52.4G  -
tank/home@auto-2013-01-28_13.00.weekly    409M      -  52.3G  -
tank/home@auto-2013-02-04_13.00.monthly   625M      -  52.5G  -
tank/home@auto-2013-02-11_13.00.weekly    689M      -  52.5G  -
tank/home@auto-2013-02-16_13.00.weekly   17.7M      -  52.5G  -
tank/home@auto-2013-02-17_13.00.daily    17.7M      -  52.5G  -
tank/home@auto-2013-02-18_13.00.daily    17.9M      -  52.5G  -



Make use of RACCT and rctl

2013-03-02 Thread Peter Ankerstål
Hi!

I'm trying to limit memory usage for jails with the rctl API, but I don't really
get it.

I have compiled the kernel with the right options, and rctl shows me stuff like:
jail:jail22:memoryuse:deny=268435456
jail:jail22:swapuse:deny=268435456
jail:jail20:memoryuse:deny=268435456
jail:jail20:swapuse:deny=268435456
jail:jail16:memoryuse:deny=268435456
jail:jail16:swapuse:deny=268435456

but when I try to allocate memory it doesn't seem to hit the limit. Also when I 
run 
# rctl -u jail:jail20
cputime=0
datasize=0
stacksize=0
coredumpsize=0
memoryuse=0
memorylocked=0
maxproc=0
openfiles=0
vmemoryuse=0
pseudoterminals=0
swapuse=0
nthr=0
msgqqueued=0
msgqsize=0
nmsgq=0
nsem=0
nsemop=0
nshm=0
shmsize=0
wallclock=0

it seems that no accounting is done. What's missing? I can't find anything in
the manuals.

# uname -srm
FreeBSD 9.1-RELEASE-p1 amd64


/Peter.




Re: Musings on ZFS Backup strategies

2013-03-02 Thread Ronald Klop
On Fri, 01 Mar 2013 18:55:22 +0100, Volodymyr Kostyrko c.kw...@gmail.com  
wrote:



01.03.2013 16:24, Karl Denninger:

Dabbling with ZFS now, and giving some thought to how to handle backup
strategies.

ZFS' snapshot capabilities have forced me to re-think the way that I've
handled this.  Previously, near-line (and offline) backup was focused on
being able to handle disasters (e.g. a RAID adapter goes nuts and
scribbles on the entire contents of the array), a double-disk (or worse)
failure, or the obvious (e.g. fire, etc.), along with the "aw crap, I just
rm -rf'd something I'd rather not!" case.

ZFS makes snapshots very cheap, which means you can resolve the "aw
crap" situation without resorting to backups at all.  This turns the
backup situation into a disaster recovery one.

And that in turn seems to say that the ideal strategy looks more like:

Take a base snapshot immediately and zfs send it to offline storage.
Take an incremental at some interval (appropriate for disaster recovery)
and zfs send THAT to stable storage.

If I then restore the base and snapshot, I get back to where I was when
the latest snapshot was taken.  I don't need to keep the incremental
snapshot for longer than it takes to zfs send it, so I can do:

zfs snapshot pool/some-filesystem@unique-label
zfs send -i pool/some-filesystem@base pool/some-filesystem@unique-label
zfs destroy pool/some-filesystem@unique-label

and that seems to work (and restore) just fine.


Yes, I'm working with backups the same way; I wrote a simple script that
synchronizes two filesystems between distant servers. I also use the
same script to synchronize bushy filesystems (with hundreds of thousands of



Your filesystems grow a lot of hair? :-)





files) where rsync produces too big a load for synchronizing.

https://github.com/kworr/zfSnap/commit/08d8b499dbc2527a652cddbc601c7ee8c0c23301

I left it where it was, but I was also planning to write some purger for
snapshots that would automatically purge snapshots when the pool gets low on
space. Never hit that yet.



Am I looking at this the right way here?  Provided that the base backup
and incremental are both readable, it appears that I have the disaster
case covered, and the online snapshot increments and retention are
easily adjusted and cover the oops situations without having to resort
to the backups at all.

This in turn means that keeping more than two incremental dumps offline
has little or no value; the second merely being taken to insure that
there is always at least one that has been written to completion without
error to apply on top of the base.  That in turn makes the backup
storage requirement based only on entropy in the filesystem and not time
(where the tower of Hanoi style dump hierarchy imposed both a time AND
entropy cost on backup media.)


Well, snapshots can have value over a longer timeframe, depending on the
data. Being able to restore a file accidentally deleted two months ago
already saved $2k for one of our customers.



Re: svn commit: r247485 - in stable/9: crypto/openssh crypto/openssh/openbsd-compat secure/lib/libssh secure/usr.sbin/sshd

2013-03-02 Thread Dag-Erling Smørgrav
Mike Tancsa m...@sentex.net writes:
 This PR looks to be related

 http://lists.freebsd.org/pipermail/freebsd-bugs/2012-September/050139.html

That suggests a bug in the aesni driver...

Can you ktrace sshd in both cases?  My guess is the difference is that
the new version uses hw offloading while the old version doesn't.

DES
-- 
Dag-Erling Smørgrav - d...@des.no

Re: svn commit: r247485 - in stable/9: crypto/openssh crypto/openssh/openbsd-compat secure/lib/libssh secure/usr.sbin/sshd

2013-03-02 Thread Mike Tancsa
On 3/2/2013 10:33 AM, Dag-Erling Smørgrav wrote:
 Mike Tancsa m...@sentex.net writes:
 This PR looks to be related

 http://lists.freebsd.org/pipermail/freebsd-bugs/2012-September/050139.html
 
 That suggests a bug in the aesni driver...

OK, but the above uses the glxsb driver, not the aesni driver. Also, if
it was aesni, would it not show up in the geli tests I did?

 
 Can you ktrace sshd in both cases?  My guess is the difference is that
 the new version uses hw offloading while the old version doesn't.

Done.  Both files are at http://www.tancsa.com/openssh

---Mike

-- 
---
Mike Tancsa, tel +1 519 651 3400
Sentex Communications, m...@sentex.net
Providing Internet services since 1994 www.sentex.net
Cambridge, Ontario Canada   http://www.tancsa.com/

Re: svn commit: r247485 - in stable/9: crypto/openssh crypto/openssh/openbsd-compat secure/lib/libssh secure/usr.sbin/sshd

2013-03-02 Thread Dag-Erling Smørgrav
Mike Tancsa m...@sentex.net writes:
 The pcaps and basic wireshark output at

 http://tancsa.com/openssh/

This is 6.1 with aesni vs 6.1 without aesni; what I wanted was 6.1 vs
5.8, both with aesni loaded.

Could you also ktrace the server in both cases?

An easy workaround is to change the list of ciphers the server will
offer to clients by adding a Ciphers line in /etc/ssh/sshd_config.
The default is:

Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour

Either remove the AES entries or move them further down the list.  The
client will normally pick the first supported cipher.  As far as I can
tell, SecureCRT supports all the same ciphers that OpenSSH does, so just
moving arcfour{256,128} to the front of the list should work.

(AFAIK, arcfour is also much faster than aes)
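A minimal sketch of that workaround in /etc/ssh/sshd_config (the exact list is
a matter of taste; restart sshd afterwards, e.g. with service sshd restart):

  # non-AES ciphers first, AES entries moved to the back (or removed entirely)
  Ciphers arcfour256,arcfour128,blowfish-cbc,cast128-cbc,3des-cbc,aes128-ctr,aes192-ctr,aes256-ctr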

DES
-- 
Dag-Erling Smørgrav - d...@des.no

Re: svn commit: r247485 - in stable/9: crypto/openssh crypto/openssh/openbsd-compat secure/lib/libssh secure/usr.sbin/sshd

2013-03-02 Thread Mike Tancsa
On 3/2/2013 11:02 AM, Dag-Erling Smørgrav wrote:
 Mike Tancsa m...@sentex.net writes:
 The pcaps and basic wireshark output at

 http://tancsa.com/openssh/
 
 This is 6.1 with aesni vs 6.1 without aesni; what I wanted was 6.1 vs
 5.8, both with aesni loaded.

Ahh, ok. I will do it later this aft.

 
 Could you also ktrace the server in both cases?

That was the daemon in both cases.  ktrace /usr/sbin/sshd -

 
 An easy workaround is to change the list of ciphers the server will
 offer to clients by adding a Ciphers line in /etc/ssh/sshd_config.
 The default is:
 
 Ciphers 
 aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour
 
 Either remove the AES entries or move them further down the list.  The
 client will normally pick the first supported cipher.  As far as I can
 tell, SecureCRT supports all the same ciphers that OpenSSH does, so just
 moving arcfour{256,128} to the front of the list should work.
 
 (AFAIK, arcfour is also much faster than aes)

Actually, I am just testing with a FreeBSD OpenSSH client:

 ssh -c aes128-cbc testhost-with-the-issue.sentex.ca

It's for sure something to do with hardware crypto offload, because it
works fine with a cipher that is not accelerated.
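A quick way to walk through the ciphers one at a time (host name as above; if
offload is the culprit, only the AES entries should stall):

  for c in aes128-cbc aes128-ctr aes256-ctr arcfour128 blowfish-cbc 3des-cbc; do
      echo == $c
      ssh -c $c testhost-with-the-issue.sentex.ca true
  done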


---Mike

 
 DES


-- 
---
Mike Tancsa, tel +1 519 651 3400
Sentex Communications, m...@sentex.net
Providing Internet services since 1994 www.sentex.net
Cambridge, Ontario Canada   http://www.tancsa.com/

Re: svn commit: r247485 - in stable/9: crypto/openssh crypto/openssh/openbsd-compat secure/lib/libssh secure/usr.sbin/sshd

2013-03-02 Thread Dag-Erling Smørgrav
Dag-Erling Smørgrav d...@des.no writes:
 This is 6.1 with aesni vs 6.1 without aesni; what I wanted was 6.1 vs
 5.8, both with aesni loaded.

On second thought, I don't need more pcaps.

DES
-- 
Dag-Erling Smørgrav - d...@des.no

Re: Make use of RACCT and rctl

2013-03-02 Thread Edward Tomasz Napierała
Message written by Peter Ankerstål on 2 Mar 2013, at 16:21:
 Hi!
 
 Im trying to limit memory usage for jails with the rctl API. But I don't 
 really get it.
 
 I have compiled the kernel with the right options and rctl show me stuff like:
 jail:jail22:memoryuse:deny=268435456
 jail:jail22:swapuse:deny=268435456
 jail:jail20:memoryuse:deny=268435456
 jail:jail20:swapuse:deny=268435456
 jail:jail16:memoryuse:deny=268435456
 jail:jail16:swapuse:deny=268435456
 
 but when I try to allocate memory it doesn't seem to hit the limit. Also when 
 I run 
 # rctl -u jail:jail20
 cputime=0

[..]

Could you please do jls jid name and verify that a jail named jail20 is
actually running?
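For example (the jid and output below are purely illustrative):

  # jls jid name
       3  jail20

The jail name in the rule (jail:jail20:...) has to match the name column shown
by jls.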

-- 
If you cut off my head, what would I say?  Me and my head, or me and my body?


Re: Make use of RACCT and rctl

2013-03-02 Thread Peter Ankerstål

On Mar 2, 2013, at 5:15 PM, Edward Tomasz Napierała tr...@freebsd.org wrote:
 
 
 [..]
 
 Could you please do jls jid name and verify that a jail named jail20 is 
 actually
 running?
 
 -- 
 If you cut off my head, what would I say?  Me and my head, or me and my body?
 
 
Oh! 
My bad, I thought it was the name from rc.conf. But of course it is the name
from the -n flag. Now everything seems to work. Thanks!

Re: svn commit: r247485 - in stable/9: crypto/openssh crypto/openssh/openbsd-compat secure/lib/libssh secure/usr.sbin/sshd

2013-03-02 Thread Ian Lepore
On Sat, 2013-03-02 at 17:02 +0100, Dag-Erling Smørgrav wrote:
 Mike Tancsa m...@sentex.net writes:
  The pcaps and basic wireshark output at
 
  http://tancsa.com/openssh/
 
 This is 6.1 with aesni vs 6.1 without aesni; what I wanted was 6.1 vs
 5.8, both with aesni loaded.
 
 Could you also ktrace the server in both cases?
 
 An easy workaround is to change the list of ciphers the server will
 offer to clients by adding a Ciphers line in /etc/ssh/sshd_config.
 The default is:
 
 Ciphers 
 aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour
 
 Either remove the AES entries or move them further down the list.  The
 client will normally pick the first supported cipher.  As far as I can
 tell, SecureCRT supports all the same ciphers that OpenSSH does, so just
 moving arcfour{256,128} to the front of the list should work.
 
 (AFAIK, arcfour is also much faster than aes)

The last time I tried to affect the chosen cipher by manipulating the
order of the list items in the config files was a couple of years ago, but
I found then that you just can't do that.  The client side, not the
server, decides on the order, and it's based on compiled-in ordering
within the client code (not the client config).  From the server side
the only thing you can do to affect the order is leave items out of the
list (it will still try the remaining list items in the client-requested
order).
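For what it's worth, if that is still the case the ordering would have to be
set on the client side instead, e.g. in ~/.ssh/config (a sketch; host name
reused from earlier in the thread):

  Host testhost-with-the-issue.sentex.ca
      Ciphers arcfour256,arcfour128,blowfish-cbc,3des-cbc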

All of this was with OpenSSH_5.4p1_hpn13v11 FreeBSD-20100308, OpenSSL
0.9.8q 2 Dec 2010 and may be completely out of date now.

-- Ian




Re: Make use of RACCT and rctl

2013-03-02 Thread Edward Tomasz Napierała
Message written by Peter Ankerstål on 2 Mar 2013, at 17:18:
 On Mar 2, 2013, at 5:15 PM, Edward Tomasz Napierała tr...@freebsd.org wrote:
 
 
 [..]
 
 Could you please do jls jid name and verify that a jail named jail20 is 
 actually
 running?
 
 -- 
 If you cut off my head, what would I say?  Me and my head, or me and my body?
 
 
 Oh! 
 My bad, I thought it was the name from rc.conf. But of course it is the name 
 from the -n
 flag. Now everything seems to work. Thanks!

You're welcome.  Please Cc: me if you have any further questions or comments.

-- 
If you cut off my head, what would I say?  Me and my head, or me and my body?


Re: Musings on ZFS Backup strategies

2013-03-02 Thread David Magda
On Mar 1, 2013, at 21:14, Ben Morrow wrote:

 But since ZFS doesn't support POSIX.1e ACLs that's not terribly
 useful... I don't believe bsdtar/libarchive supports NFSv4 ACLs yet.

Ah yes, just noticed that. Thought it did.

https://github.com/libarchive/libarchive/wiki/TarNFS4ACLs



Re: Musings on ZFS Backup strategies

2013-03-02 Thread Peter Jeremy
On 2013-Mar-01 08:24:53 -0600, Karl Denninger k...@denninger.net wrote:
If I then restore the base and snapshot, I get back to where I was when
the latest snapshot was taken.  I don't need to keep the incremental
snapshot for longer than it takes to zfs send it, so I can do:

zfs snapshot pool/some-filesystem@unique-label
zfs send -i pool/some-filesystem@base pool/some-filesystem@unique-label
zfs destroy pool/some-filesystem@unique-label

and that seems to work (and restore) just fine.

This gives you an incremental since the base snapshot - which will
probably grow in size over time.  If you are storing the ZFS send
streams on (eg) tape, rather than receiving them, you probably still
want the Towers of Hanoi style backup hierarchy to control your
backup volume.  It's also worth noting that whilst the stream will
contain the compression attributes of the filesystem(s) in it, the
actual data in the stream is uncompressed.

This in turn means that keeping more than two incremental dumps offline
has little or no value; the second merely being taken to insure that
there is always at least one that has been written to completion without
error to apply on top of the base.

This is quite a critical point with this style of backup: The ZFS send
stream is not intended as an archive format.  It includes error
detection but no error correction and any error in a stream renders
the whole stream unusable (you can't retrieve only part of a stream).
If you go this way, you probably want to wrap the stream in a FEC
container (eg based on ports/comms/libfec) and/or keep multiple copies.
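A rough sketch of adding some redundancy around a stored stream; this uses a
plain checksum plus PAR2 recovery data (archivers/par2cmdline) rather than a
libfec-based wrapper, and the file names are hypothetical:

  zfs send -i pool/some-filesystem@base pool/some-filesystem@unique-label \
      > /backup/incr.zstream
  sha256 /backup/incr.zstream > /backup/incr.zstream.sha256
  par2 create -r10 /backup/incr.zstream   # 10% recovery data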

The recommended approach is to do zfs send | zfs recv and store a
replica of your pool (with whatever level of RAID that meets your
needs).  This way, you immediately detect an error in the send stream
and can repeat the send.  You then use scrub to verify (and recover)
the replica.

(Yes, I know, I've been a ZFS resister ;-))

Resistance is futile. :-)

On 2013-Mar-01 15:34:39 -0500, Daniel Eischen deisc...@freebsd.org wrote:
It wasn't clear that snapshots were traversable as a normal
directory structure.  I was thinking it was just a blob
that you had to roll back to in order to get anything out
of it.

Snapshots appear in a .zfs/snapshot/SNAPSHOT_NAME directory at each
mountpoint and are accessible as a normal read-only directory
hierarchy below there.  OTOH, the send stream _is_ a blob.

Am I correct in assuming that one could:

   # zfs send -R snapshot | dd obs=10240 of=/dev/rst0

to archive it to tape instead of another [system:]drive?

Yes.  The output from zfs send is a stream of bytes that you can treat
as you would any other stream of bytes.  But this approach isn't
recommended.

-- 
Peter Jeremy




Re: Musings on ZFS Backup strategies

2013-03-02 Thread Karl Denninger

On 3/2/2013 4:14 PM, Peter Jeremy wrote:
 On 2013-Mar-01 08:24:53 -0600, Karl Denninger k...@denninger.net wrote:
 If I then restore the base and snapshot, I get back to where I was when
 the latest snapshot was taken.  I don't need to keep the incremental
 snapshot for longer than it takes to zfs send it, so I can do:

 zfs snapshot pool/some-filesystem@unique-label
 zfs send -i pool/some-filesystem@base pool/some-filesystem@unique-label
 zfs destroy pool/some-filesystem@unique-label

 and that seems to work (and restore) just fine.
 This gives you an incremental since the base snapshot - which will
 probably grow in size over time.  If you are storing the ZFS send
 streams on (eg) tape, rather than receiving them, you probably still
 want the Towers of Hanoi style backup hierarchy to control your
 backup volume.  It's also worth noting that whilst the stream will
 contain the compression attributes of the filesystem(s) in it, the
 actual data is the stream in uncompressed
I noted that.  The script I wrote to do this looks at the compression
status in the filesystem and, if enabled, pipes the data stream through
pbzip2 on the way to storage.  The only problem with this presumption is
that for database data filesystems the best practices say that you
should set the recordsize to that of the underlying page size of the
dbms (e.g. 8k for Postgresql) for best performance and NOT enable
compression.

Reality however is that the on-disk format of most database files is
EXTREMELY compressible (often WELL better than 2:1), so I sacrifice
there.  I think the better option is to stuff a user parameter into the
filesystem attribute table (which apparently I can do without boundary)
telling the script whether or not to compress on output so it's not tied
to the filesystem's compression setting.

I'm quite curious, in fact, as to whether the "best practices" really
are best in today's world.  Specifically, for a CPU-laden machine with lots
of compute power I wonder if enabling compression on the database
filesystems and leaving the recordsize alone would be a net performance
win due to the reduction in actual I/O volume.  This assumes you have
the CPU available, of course, but that has gotten cheaper much faster
than I/O bandwidth has.
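A sketch of what that experiment could look like (dataset names are
hypothetical; lzjb was the stock compression algorithm at the time):

  # conventional advice: match the dbms page size, no compression
  zfs create -o recordsize=8k -o compression=off tank/pgdata
  # the variant being wondered about: default recordsize, compression on
  zfs create -o compression=lzjb tank/pgdata-test
  # after loading data, see what the trade actually bought
  zfs get compressratio tank/pgdata-test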

 This in turn means that keeping more than two incremental dumps offline
 has little or no value; the second merely being taken to insure that
 there is always at least one that has been written to completion without
 error to apply on top of the base.
 This is quite a critical point with this style of backup: The ZFS send
 stream is not intended as an archive format.  It includes error
 detection but no error correction and any error in a stream renders
 the whole stream unusable (you can't retrieve only part of a stream).
 If you go this way, you probably want to wrap the stream in a FEC
 container (eg based on ports/comms/libfec) and/or keep multiple copies.
That's no more of a problem than it is for a dump file saved on a disk
though, is it?  While restore can (putatively) read past errors on a
tape, in reality if the storage is a disk and part of the file is
unreadable the REST of that particular archive is unreadable.  Skipping
unreadable records does sorta work for tapes, but it rarely if ever
does for storage onto a spinning device within the boundary of the
impacted file.

In practice I attempt to cover this by (1) saving the stream to local
disk and then (2) rsync'ing the first disk to a second in the same
cabinet.  If the file I just wrote is unreadable I should discover it at
(2), which hopefully is well before I actually need it in anger.  Disk
#2 then gets rotated out to an offsite vault on a regular schedule in
case the building catches fire or similar.  My exposure here is to
time-related bitrot which is a non-zero risk but I can't scrub a disk
that's sitting in a vault, so I don't know that there's a realistic
means around this risk other than a full online hotsite that I can
ship the snapshots to (which I don't have the necessary bandwidth or
storage to cover.)

If I change the backup media (currently UFS formatted) to ZFS formatted
and dump directly there via a zfs send/receive I could run both drives
as a mirror instead of rsync'ing from one to the other after the first
copy is done, then detach the mirror to rotate the drive out and attach
the other one, causing a resilver.  That's fine EXCEPT if I have a
controller go insane I now probably lose everything other than the
offsite copy since everything is up for write during the snapshot
operation.  That ain't so good and that's a risk I've had turn into
reality twice in 20 years.  On the upside if the primary has an error on
it I catch it when I try to resilver as that operation will fail since
the entire data structure that's on-disk and written has to be traversed
and the checksums should catch any silent corruption. If that happens I
know I'm naked (other than the vault copy which I hope is good!) until I
replace the 

Re: Musings on ZFS Backup strategies

2013-03-02 Thread Steven Hartland
- Original Message - 
From: Karl Denninger k...@denninger.net

Reality however is that the on-disk format of most database files is
EXTREMELY compressible (often WELL better than 2:1), so I sacrifice
there.  I think the better option is to stuff a user parameter into the
filesystem attribute table (which apparently I can do without boundary)
telling the script whether or not to compress on output so it's not tied
to the filesystem's compression setting.

I'm quite-curious, in fact, as to whether the best practices really
are in today's world.  Specifically, for a CPU-laden machine with lots
of compute power I wonder if enabling compression on the database
filesystems and leaving the recordsize alone would be a net performance
win due to the reduction in actual I/O volume.  This assumes you have
the CPU available, of course, but that has gotten cheaper much faster
than I/O bandwidth has.


We've been using ZFS compression on mysql filesystems for quite some
time and have had good success with it. It is dependent on the HW, as
you say, though, so you need to know where the bottleneck is in your
system: CPU or disk.

mysql 5.6 also added better recordsize support which could be interesting.

Also be aware of the additional latency the compression can add. I'm
also not 100% sure that the compression in ZFS scales beyond one core;
it's been something I've meant to look into / test, but I've not got
round to it.

   Regards
   Steve





Re: Musings on ZFS Backup strategies

2013-03-02 Thread John
The recommended approach is to do zfs send | zfs recv and store a
replica of your pool (with whatever level of RAID that meets your
needs).  This way, you immediately detect an error in the send stream
and can repeat the send.  You then use scrub to verify (and recover)
the replica.

I do zfs send | zfs recv from several machines to a backup server in a
different building. Each day an incremental send is done using the previous
day's incremental send as the base. One reason for this approach is to minimize
the amount of bandwidth required since one of the machines is across a T1.

This technique requires keeping a record of the current base snapshot for each
filesystem, and a system in place to keep from destroying the base snapshot.
I learned the latter the hard way when a machine went down for several days,
and when it came back up the script that destroys out-of-date snapshots deleted
the incremental base snapshot.

I'm running 9.1-stable with zpool features on my machines, and with this upgrade
came zfs hold and zfs release. This allows you to lock a snapshot so it can't
be destroyed until it's released. With this feature, I do the following for
each filesystem:

zfs send -i yesterdays_snapshot todays_snapshot | ssh backup_server zfs recv
on success:
  zfs hold todays_snapshot
  zfs release yesterdays_snapshot
  ssh backup_server zfs hold todays_snapshot
  ssh backup_server zfs release yesterdays_snapshot
  update zfs_send_dates file with filesystem and snapshot name
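A minimal shell sketch of that sequence for one filesystem (dataset, snapshot
names, hold tag, host and bookkeeping file are all made up):

  #!/bin/sh
  fs=tank/data
  old=$fs@2013-03-01
  new=$fs@2013-03-02
  srv=backup_server

  zfs snapshot "$new"
  if zfs send -i "$old" "$new" | ssh "$srv" zfs recv "$fs"; then
      # lock today's snapshot on both ends, release yesterday's
      zfs hold backup "$new"
      zfs release backup "$old"
      ssh "$srv" zfs hold backup "$new"
      ssh "$srv" zfs release backup "$old"
      echo "$fs $new" >> /var/db/zfs_send_dates
  fi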


John Theus
TheUsGroup.com


Re: Musings on ZFS Backup strategies

2013-03-02 Thread Karl Denninger
Quoth Ben Morrow:
 I don't know what medium you're backing up to (does anyone use tape any
 more?) but when backing up to disk I much prefer to keep the backup in
 the form of a filesystem rather than as 'zfs send' streams. One reason
 for this is that I believe that new versions of the ZFS code are more
 likely to be able to correctly read old versions of the filesystem than
 old versions of the stream format; this may not be correct any more,
 though.

 Another reason is that it means I can do 'rolling snapshot' backups. I
 do an initial dump like this

 # zpool is my working pool
 # bakpool is a second pool I am backing up to

 zfs snapshot -r zpool/fs@dump
 zfs send -R zpool/fs@dump | zfs recv -vFd bakpool

 That pipe can obviously go through ssh or whatever to put the backup on
 a different machine. Then to make an increment I roll forward the
 snapshot like this

 zfs rename -r zpool/fs@dump dump-old
 zfs snapshot -r zpool/fs@dump
 zfs send -R -I @dump-old zpool/fs@dump | zfs recv -vFd bakpool
 zfs destroy -r zpool/fs@dump-old
 zfs destroy -r bakpool/fs@dump-old

 (Notice that the increment starts at a snapshot called @dump-old on the
 send side but at a snapshot called @dump on the recv side. ZFS can
 handle this perfectly well, since it identifies snapshots by UUID, and
 will rename the bakpool snapshot as part of the recv.)

 This brings the filesystem on bakpool up to date with the filesystem on
 zpool, including all snapshots, but never creates an increment with more
 than one backup interval's worth of data in. If you want to keep more
 history on the backup pool than the source pool, you can hold off on
 destroying the old snapshots, and instead rename them to something
 unique. (Of course, you could always give them unique names to start
 with, but I find it more convenient not to.)

Uh, I see a potential problem here.

What if the zfs send | zfs recv command fails for some reason before
completion?  I have noted that zfs recv is atomic -- if it fails for any
reason the entire receive is rolled back like it never happened.

But you then destroy the old snapshot, and the next time this runs the
new gets rolled down.  It would appear that there's an increment
missing, never to be seen again.

What gets lost in that circumstance?  Anything changed between the two
times -- and silently at that? (yikes!)

-- 
-- Karl Denninger
/The Market Ticker ®/ http://market-ticker.org
Cuda Systems LLC


Re: Musings on ZFS Backup strategies

2013-03-02 Thread Ben Morrow
Quoth Karl Denninger k...@denninger.net:
 Quoth Ben Morrow:
  I don't know what medium you're backing up to (does anyone use tape any
  more?) but when backing up to disk I much prefer to keep the backup in
  the form of a filesystem rather than as 'zfs send' streams. One reason
  for this is that I believe that new versions of the ZFS code are more
  likely to be able to correctly read old versions of the filesystem than
  old versions of the stream format; this may not be correct any more,
  though.
 
  Another reason is that it means I can do 'rolling snapshot' backups. I
  do an initial dump like this
 
  # zpool is my working pool
  # bakpool is a second pool I am backing up to
 
  zfs snapshot -r zpool/fs@dump
  zfs send -R zpool/fs@dump | zfs recv -vFd bakpool
 
  That pipe can obviously go through ssh or whatever to put the backup on
  a different machine. Then to make an increment I roll forward the
  snapshot like this
 
  zfs rename -r zpool/fs@dump dump-old
  zfs snapshot -r zpool/fs@dump
  zfs send -R -I @dump-old zpool/fs@dump | zfs recv -vFd bakpool
  zfs destroy -r zpool/fs@dump-old
  zfs destroy -r bakpool/fs@dump-old
 
  (Notice that the increment starts at a snapshot called @dump-old on the
  send side but at a snapshot called @dump on the recv side. ZFS can
  handle this perfectly well, since it identifies snapshots by UUID, and
  will rename the bakpool snapshot as part of the recv.)
 
  This brings the filesystem on bakpool up to date with the filesystem on
  zpool, including all snapshots, but never creates an increment with more
  than one backup interval's worth of data in. If you want to keep more
  history on the backup pool than the source pool, you can hold off on
  destroying the old snapshots, and instead rename them to something
  unique. (Of course, you could always give them unique names to start
  with, but I find it more convenient not to.)
 
 Uh, I see a potential problem here.
 
 What if the zfs send | zfs recv command fails for some reason before
 completion?  I have noted that zfs recv is atomic -- if it fails for any
 reason the entire receive is rolled back like it never happened.
 
 But you then destroy the old snapshot, and the next time this runs the
 new gets rolled down.  It would appear that there's an increment
 missing, never to be seen again.

No, if the recv fails my backup script aborts and doesn't delete the old
snapshot. Cleanup then means removing the new snapshot and renaming the
old back on the source zpool; in my case I do this by hand, but it could
be automated given enough thought. (The names of the snapshots on the
backup pool don't matter; they will be cleaned up by the next successful
recv.)
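In other words, the by-hand cleanup after a failed recv is roughly (snapshot
names as in the example above):

  zfs destroy -r zpool/fs@dump                   # drop the snapshot whose send failed
  zfs rename -r zpool/fs@dump-old zpool/fs@dump  # put the old name back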

 What gets lost in that circumstance?  Anything changed between the two
 times -- and silently at that? (yikes!)

It's impossible to recv an incremental stream on top of the wrong
snapshot (identified by UUID, not by its current name), so nothing can
get silently lost. A 'zfs recv -F' will find the correct starting
snapshot on the destination filesystem (assuming it's there) regardless
of its name, and roll forward to the state as of the end snapshot. If a
recv succeeds you can be sure nothing up to that point has been missed.

The worst that can happen is if you mistakenly delete the snapshot on
the source pool that marks the end of the last successful recv on the
backup pool; in that case you have to take an increment from further
back (which will therefore be a larger incremental stream than it needed
to be). The very worst case is if you end up without any snapshots in
common between the source and backup pools, and you have to start again
with a full dump.

Ben



Re: Musings on ZFS Backup strategies

2013-03-02 Thread Karl Denninger

On 3/2/2013 10:23 PM, Ben Morrow wrote:
 Quoth Karl Denninger k...@denninger.net:
 Quoth Ben Morrow:
 I don't know what medium you're backing up to (does anyone use tape any
 more?) but when backing up to disk I much prefer to keep the backup in
 the form of a filesystem rather than as 'zfs send' streams. One reason
 for this is that I believe that new versions of the ZFS code are more
 likely to be able to correctly read old versions of the filesystem than
 old versions of the stream format; this may not be correct any more,
 though.

 Another reason is that it means I can do 'rolling snapshot' backups. I
 do an initial dump like this

 # zpool is my working pool
 # bakpool is a second pool I am backing up to

  zfs snapshot -r zpool/fs@dump
  zfs send -R zpool/fs@dump | zfs recv -vFd bakpool

 That pipe can obviously go through ssh or whatever to put the backup on
 a different machine. Then to make an increment I roll forward the
 snapshot like this

  zfs rename -r zpool/fs@dump dump-old
  zfs snapshot -r zpool/fs@dump
  zfs send -R -I @dump-old zpool/fs@dump | zfs recv -vFd bakpool
  zfs destroy -r zpool/fs@dump-old
  zfs destroy -r bakpool/fs@dump-old

 (Notice that the increment starts at a snapshot called @dump-old on the
 send side but at a snapshot called @dump on the recv side. ZFS can
 handle this perfectly well, since it identifies snapshots by UUID, and
 will rename the bakpool snapshot as part of the recv.)

 This brings the filesystem on bakpool up to date with the filesystem on
 zpool, including all snapshots, but never creates an increment with more
 than one backup interval's worth of data in. If you want to keep more
 history on the backup pool than the source pool, you can hold off on
 destroying the old snapshots, and instead rename them to something
 unique. (Of course, you could always give them unique names to start
 with, but I find it more convenient not to.)
 Uh, I see a potential problem here.

 What if the zfs send | zfs recv command fails for some reason before
 completion?  I have noted that zfs recv is atomic -- if it fails for any
 reason the entire receive is rolled back like it never happened.

 But you then destroy the old snapshot, and the next time this runs the
 new gets rolled down.  It would appear that there's an increment
 missing, never to be seen again.
 No, if the recv fails my backup script aborts and doesn't delete the old
 snapshot. Cleanup then means removing the new snapshot and renaming the
 old back on the source zpool; in my case I do this by hand, but it could
 be automated given enough thought. (The names of the snapshots on the
 backup pool don't matter; they will be cleaned up by the next successful
 recv.)
I was concerned that if the one you rolled to "old" gets killed without
the backup being successful, then you're screwed, as you've lost the
context.  I presume that zfs recv will properly set the exit code
non-zero if something's wrong (I would hope so!)
 What gets lost in that circumstance?  Anything changed between the two
 times -- and silently at that? (yikes!)
 It's impossible to recv an incremental stream on top of the wrong
 snapshot (identified by UUID, not by its current name), so nothing can
 get silently lost. A 'zfs recv -F' will find the correct starting
 snapshot on the destination filesystem (assuming it's there) regardless
 of its name, and roll forward to the state as of the end snapshot. If a
 recv succeeds you can be sure nothing up to that point has been missed.
Ah, ok.  THAT I did not understand.  So the zfs recv process checks what
it's about to apply the delta against, and if it can't find a consistent
place to start, it barfs rather than screwing you.  That's good.  As long as
it gets caught I can live with it.  Recovery isn't a terrible pain in
the butt so long as it CAN be recovered.  It's the potential for silent
failures that scare the bejeezus out of me for all the obvious reasons.
 The worst that can happen is if you mistakenly delete the snapshot on
 the source pool that marks the end of the last successful recv on the
 backup pool; in that case you have to take an increment from further
 back (which will therefore be a larger incremental stream than it needed
 to be). The very worst case is if you end up without any snapshots in
 common between the source and backup pools, and you have to start again
 with a full dump.

 Ben
Got it.

That's not great in that it could force a new full copy, but it's also
not the end of the world.  In my case I am already automatically taking
daily and 4-hour snaps, keeping a week's worth around, which is more
than enough time to be able to obtain a consistent place to go from. 
That should be ok then.

I think I'm going to play with this and see what I think of it.  One
thing that is very attractive to this design is to have the receiving
side be a mirror, then to rotate to the vault copy run a scrub (to
insure that both members are 

Re: Musings on ZFS Backup strategies

2013-03-02 Thread Phil Regnauld
Karl Denninger (karl) writes:
 
 I think I'm going to play with this and see what I think of it.  One
 thing that is very attractive to this design is to have the receiving
 side be a mirror, then to rotate to the vault copy run a scrub (to
 insure that both members are consistent at a checksum level), break the
 mirror and put one in the vault, replacing it with the drive coming FROM
 the vault, then do a zpool replace and allow it to resilver into the
 other drive.  You now have the two in consistent state again locally if
  the pool pukes and one in the vault in the event of a fire or other
  "entire facility is toast" event.

That's one solution.

 The only risk that makes me uncomfortable doing this is that the pool is
 always active when the system is running.  With UFS backup disks it's
 not -- except when being actually written to they're unmounted, and this
 materially decreases the risk of an insane adapter scribbling the
 drives, since there is no I/O at all going to them unless mounted. 
 While the backup pool would be nominally idle it is probably
 more-exposed to a potential scribble than the UFS-mounted packs would be.

You could zpool export in between syncs on the target, assuming that's
not your root pool :)
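Roughly, from the sending side (backuphost is a stand-in name; the send command
is the incremental one from earlier in the thread):

  ssh backuphost zpool import bakpool
  zfs send -R -I @dump-old zpool/fs@dump | ssh backuphost zfs recv -vFd bakpool
  ssh backuphost zpool export bakpool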

Cheers,
Phil


Re: Musings on ZFS Backup strategies

2013-03-02 Thread Ben Morrow
Quoth Phil Regnauld regna...@x0.dk:
 
  The only risk that makes me uncomfortable doing this is that the pool is
  always active when the system is running.  With UFS backup disks it's
  not -- except when being actually written to they're unmounted, and this
  materially decreases the risk of an insane adapter scribbling the
  drives, since there is no I/O at all going to them unless mounted. 
  While the backup pool would be nominally idle it is probably
  more-exposed to a potential scribble than the UFS-mounted packs would be.
 
   Could zpool export in between syncs on the target, assuming that's not
   your root pool :)

If I were feeling paranoid I might be tempted to not only keep the pool
exported when not in use, but to 'zpool offline' one half of the mirror
while performing the receive, then put it back online and allow it to
resilver before exporting the whole pool again. I'm not sure if there's
any way to wait for the resilver to finish except to poll 'zpool
status', though.
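A crude polling loop along those lines (pool and device names are hypothetical):

  zpool online bakpool da2      # reattach the half that was offlined
  while zpool status bakpool | grep -q 'resilver in progress'; do
      sleep 60
  done
  zpool export bakpool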

Ben
