Re: [ANNOUNCEMENT] pkg 1.3.0 out!

2014-07-24 Thread David Chisnall
Great news!

I've been running the 1.3 prereleases for a while, and aside from one hiccup in 
the early alphas, it's been a very pleasant experience.

Thanks to all involved,

David

On 23 Jul 2014, at 15:42, Baptiste Daroussin b...@freebsd.org wrote:

 Hi all,
 
 I'm very pleased to announce the release of pkg 1.3.0.
 This version is the result of almost nine months of hard work.
 
 Here are the statistics for the version:
 - 373 files changed, 66973 insertions(+), 38512 deletions(-)
 - 29 different contributors
 
 Please note that for the first time I'm not the main contributor, and I would
 like to particularly thank Vsevolod Stakhov for all the hard work he has done
 to allow us to get this release out. I would also like to give special thanks
 to Andrej Zverev for the tons of hours spent testing and cleaning the bug
 tracker!
 
 So much has happened that it is hard to summarize, so I'll try to highlight the
 major points:
 - New solver: pkg now has a real SAT solver able to automatically handle
  conflicts and dynamically discover them (yes, pkg set -o is deprecated now)
 - pkg install is now able to install local files as well, resolving their
  dependencies from the remote repositories
 - Large parts of the code have been sandboxed
 - Lots of rework to improve portability
 - The package installation process has been reworked to be safer and to handle
  the schg flags properly
 - Important modifications of the locking system for finer-grained locks
 - Massive usage of libucl
 - Simplification of the API
 - Lots of improvements to the UI to provide a better user experience
 - Lots of improvements in multi-repository mode
 - The pkg audit code has been moved into the library
 - pkg -o A=B overrides configuration file options from the command line
 - The UI now supports long options
 - A package is no longer uniquely identified by its origin
 - Tons of bug fixes
 - Tons of behaviour fixes
 - Way more!
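As a sketch of two of the new conveniences listed above (the flag and option names follow this announcement; the package file name and option value are illustrative only, not from the release notes):

```shell
# Install a locally built package file; since 1.3.0 pkg resolves its
# dependencies from the configured remote repositories automatically.
pkg install ./myport-1.0.txz

# Override a pkg.conf option for a single invocation with -o:
pkg -o ASSUME_ALWAYS_YES=true upgrade
```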
 
 Thank you to all contributors:
 Alberto Villa, Alexandre Perrin, Andrej Zverev, Antoine Brodin, Brad Davis,
 Bryan Drewery, Dag-Erling Smørgrav, Dmitry Marakasov, Elvira Khabirova, Jamie
 Landeg Jones, Jilles Tjoelker, John Marino, Julien Laffaye, Mathieu Arnold,
 Matthew Seaman, Maximilian Gaß, Michael Gehring, Michael Gmelin, Nicolas 
 Szalay,
 Rodrigo Osorio, Roman Naumann, Rui Paulo, Sean Channel, Stanislav E. Putrya,
 Vsevolod Stakhov, Xin Li, coctic
 
 Regards,
 Bapt on behalf of the pkg@

___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org


zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Larry Rosenman
TRYING to use zfs send/recv between a 10-STABLE and an 11-CURRENT
system, I receive only the non-descript

invalid backup stream.

borg.lerctr.org /home/ler $ sudo bin/backup-TBH-ZFS-initial.sh
Password:
receiving full stream of zroot@2014-07-24 into 
zroot/backups/TBH@2014-07-24

received 41.7KB stream in 300 seconds (142B/sec)
receiving full stream of zroot/usr@2014-07-24 into 
zroot/backups/TBH/usr@2014-07-24

received 41.7KB stream in 1 seconds (41.7KB/sec)
receiving full stream of zroot/usr/local@2014-07-24 into 
zroot/backups/TBH/usr/local@2014-07-24

received 2.81GB stream in 1116 seconds (2.58MB/sec)
receiving full stream of zroot/usr/src@2014-07-24 into 
zroot/backups/TBH/usr/src@2014-07-24

cannot receive new filesystem stream: invalid backup stream
borg.lerctr.org /home/ler $ cat bin/backup-TBH-ZFS-initial.sh
#!/bin/sh
DATE=`date +%Y-%m-%d`
#DATE2=2013-03-24
#DATE2=`date -v -1d +%Y-%m-%d`
# snap the source
ssh r...@tbh.lerctr.org zfs snapshot -r zroot@${DATE}
# zfs copy the source to here.
ssh r...@tbh.lerctr.org zfs send -R -D zroot@${DATE} | \
 ssh home.lerctr.org "zfs recv -F -u -v -d zroot/backups/TBH"
# make sure we NEVER allow the backup stuff to automount.
/sbin/zfs list -H -t filesystem -r zroot/backups/TBH | \
awk '{printf "/sbin/zfs set canmount=noauto %s\n",$1}' | sh
borg.lerctr.org /home/ler $
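(An aside for anyone copying this script: the awk stage builds one `zfs set` command per dataset and pipes it to sh, and the quoting around the printf format tends to get mangled when pasted into mail. A dry-run sketch, with a hypothetical dataset list standing in for the real `zfs list` output so it can be run anywhere:)

```shell
#!/bin/sh
# Stand-in for `/sbin/zfs list -H -t filesystem -r zroot/backups/TBH`,
# so the pipeline can be dry-run on any machine:
printf 'zroot/backups/TBH\nzroot/backups/TBH/usr\nzroot/backups/TBH/usr/local\n' |
  awk '{printf "/sbin/zfs set canmount=noauto %s\n", $1}'
# Prints one "/sbin/zfs set canmount=noauto <dataset>" line per dataset.
# Only once the generated commands look right, append `| sh` to execute them.
```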

This has been happening for YEARS and I can't seem to interest anyone in 
fixing it.


How can we get to the bottom of this?

borg.lerctr.org /home/ler $ uname -a
FreeBSD borg.lerctr.org 11.0-CURRENT FreeBSD 11.0-CURRENT #56 r268982M: 
Tue Jul 22 10:14:59 CDT 2014 
r...@borg.lerctr.org:/usr/obj/usr/src/sys/VT-LER  amd64

borg.lerctr.org /home/ler $ ssh tbh uname -a
FreeBSD thebighonker.lerctr.org 10.0-STABLE FreeBSD 10.0-STABLE #39 
r269019M: Wed Jul 23 11:44:35 CDT 2014 
r...@thebighonker.lerctr.org:/usr/obj/usr/src/sys/GENERIC  amd64

borg.lerctr.org /home/ler $

--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 (c) E-Mail: l...@lerctr.org
US Mail: 108 Turvey Cove, Hutto, TX 78634-5688


Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Allan Jude
On 2014-07-24 13:33, Larry Rosenman wrote:
 TRYING to use zfs send/recv between a 10-STABLE and an 11-CURRENT
 system, and receive the non-descript
 invalid backup stream.
 
 borg.lerctr.org /home/ler $ sudo bin/backup-TBH-ZFS-initial.sh
 Password:
 receiving full stream of zroot@2014-07-24 into zroot/backups/TBH@2014-07-24
 received 41.7KB stream in 300 seconds (142B/sec)
 receiving full stream of zroot/usr@2014-07-24 into
 zroot/backups/TBH/usr@2014-07-24
 received 41.7KB stream in 1 seconds (41.7KB/sec)
 receiving full stream of zroot/usr/local@2014-07-24 into
 zroot/backups/TBH/usr/local@2014-07-24
 received 2.81GB stream in 1116 seconds (2.58MB/sec)
 receiving full stream of zroot/usr/src@2014-07-24 into
 zroot/backups/TBH/usr/src@2014-07-24
 cannot receive new filesystem stream: invalid backup stream
 borg.lerctr.org /home/ler $ cat bin/backup-TBH-ZFS-initial.sh
 #!/bin/sh
 DATE=`date +%Y-%m-%d`
 #DATE2=2013-03-24
 #DATE2=`date -v -1d +%Y-%m-%d`
 # snap the source
 ssh r...@tbh.lerctr.org zfs snapshot -r zroot@${DATE}
 # zfs copy the source to here.
 ssh r...@tbh.lerctr.org zfs send -R -D zroot@${DATE} | \
  ssh home.lerctr.org "zfs recv -F -u -v -d zroot/backups/TBH"
 # make sure we NEVER allow the backup stuff to automount.
 /sbin/zfs list -H -t filesystem -r zroot/backups/TBH | \
 awk '{printf "/sbin/zfs set canmount=noauto %s\n",$1}' | sh
 borg.lerctr.org /home/ler $
 
 This has been happening for YEARS and I can't seem to interest anyone in
 fixing it.
 
 How can we get to the bottom of this?
 
 borg.lerctr.org /home/ler $ uname -a
 FreeBSD borg.lerctr.org 11.0-CURRENT FreeBSD 11.0-CURRENT #56 r268982M:
 Tue Jul 22 10:14:59 CDT 2014
 r...@borg.lerctr.org:/usr/obj/usr/src/sys/VT-LER  amd64
 borg.lerctr.org /home/ler $ ssh tbh uname -a
 FreeBSD thebighonker.lerctr.org 10.0-STABLE FreeBSD 10.0-STABLE #39
 r269019M: Wed Jul 23 11:44:35 CDT 2014
 r...@thebighonker.lerctr.org:/usr/obj/usr/src/sys/GENERIC  amd64
 borg.lerctr.org /home/ler $
 

Try adding -v to the 'zfs send' and see if it gives you more detail.

Can you also try this script for the replication:

http://github.com/allanjude/zxfer



-- 
Allan Jude





Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Larry Rosenman

On 2014-07-24 12:38, Allan Jude wrote:

On 2014-07-24 13:33, Larry Rosenman wrote:

TRYING to use zfs send/recv between a 10-STABLE and an 11-CURRENT
system, and receive the non-descript
invalid backup stream.

borg.lerctr.org /home/ler $ sudo bin/backup-TBH-ZFS-initial.sh
Password:
receiving full stream of zroot@2014-07-24 into 
zroot/backups/TBH@2014-07-24

received 41.7KB stream in 300 seconds (142B/sec)
receiving full stream of zroot/usr@2014-07-24 into
zroot/backups/TBH/usr@2014-07-24
received 41.7KB stream in 1 seconds (41.7KB/sec)
receiving full stream of zroot/usr/local@2014-07-24 into
zroot/backups/TBH/usr/local@2014-07-24
received 2.81GB stream in 1116 seconds (2.58MB/sec)
receiving full stream of zroot/usr/src@2014-07-24 into
zroot/backups/TBH/usr/src@2014-07-24
cannot receive new filesystem stream: invalid backup stream
borg.lerctr.org /home/ler $ cat bin/backup-TBH-ZFS-initial.sh
#!/bin/sh
DATE=`date +%Y-%m-%d`
#DATE2=2013-03-24
#DATE2=`date -v -1d +%Y-%m-%d`
# snap the source
ssh r...@tbh.lerctr.org zfs snapshot -r zroot@${DATE}
# zfs copy the source to here.
ssh r...@tbh.lerctr.org zfs send -R -D zroot@${DATE} | \
 ssh home.lerctr.org "zfs recv -F -u -v -d zroot/backups/TBH"
# make sure we NEVER allow the backup stuff to automount.
/sbin/zfs list -H -t filesystem -r zroot/backups/TBH | \
awk '{printf "/sbin/zfs set canmount=noauto %s\n",$1}' | sh
borg.lerctr.org /home/ler $

This has been happening for YEARS and I can't seem to interest anyone in
fixing it.

How can we get to the bottom of this?

borg.lerctr.org /home/ler $ uname -a
FreeBSD borg.lerctr.org 11.0-CURRENT FreeBSD 11.0-CURRENT #56 r268982M:
Tue Jul 22 10:14:59 CDT 2014
r...@borg.lerctr.org:/usr/obj/usr/src/sys/VT-LER  amd64
borg.lerctr.org /home/ler $ ssh tbh uname -a
FreeBSD thebighonker.lerctr.org 10.0-STABLE FreeBSD 10.0-STABLE #39
r269019M: Wed Jul 23 11:44:35 CDT 2014
r...@thebighonker.lerctr.org:/usr/obj/usr/src/sys/GENERIC  amd64
borg.lerctr.org /home/ler $



Try adding -v to the 'zfs send' and see if it gives you more detail.

Can you also try this script for the replication:

http://github.com/allanjude/zxfer

I've done that in the past and got nothing, but I will try again.

I will also look at zxfer :)

--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 (c) E-Mail: l...@lerctr.org
US Mail: 108 Turvey Cove, Hutto, TX 78634-5688


Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Larry Rosenman

On 2014-07-24 12:43, Larry Rosenman wrote:

On 2014-07-24 12:38, Allan Jude wrote:

On 2014-07-24 13:33, Larry Rosenman wrote:

TRYING to use zfs send/recv between a 10-STABLE and an 11-CURRENT
system, and receive the non-descript
invalid backup stream.

borg.lerctr.org /home/ler $ sudo bin/backup-TBH-ZFS-initial.sh
Password:
receiving full stream of zroot@2014-07-24 into 
zroot/backups/TBH@2014-07-24

received 41.7KB stream in 300 seconds (142B/sec)
receiving full stream of zroot/usr@2014-07-24 into
zroot/backups/TBH/usr@2014-07-24
received 41.7KB stream in 1 seconds (41.7KB/sec)
receiving full stream of zroot/usr/local@2014-07-24 into
zroot/backups/TBH/usr/local@2014-07-24
received 2.81GB stream in 1116 seconds (2.58MB/sec)
receiving full stream of zroot/usr/src@2014-07-24 into
zroot/backups/TBH/usr/src@2014-07-24
cannot receive new filesystem stream: invalid backup stream
borg.lerctr.org /home/ler $ cat bin/backup-TBH-ZFS-initial.sh
#!/bin/sh
DATE=`date +%Y-%m-%d`
#DATE2=2013-03-24
#DATE2=`date -v -1d +%Y-%m-%d`
# snap the source
ssh r...@tbh.lerctr.org zfs snapshot -r zroot@${DATE}
# zfs copy the source to here.
ssh r...@tbh.lerctr.org zfs send -R -D zroot@${DATE} | \
 ssh home.lerctr.org "zfs recv -F -u -v -d zroot/backups/TBH"
# make sure we NEVER allow the backup stuff to automount.
/sbin/zfs list -H -t filesystem -r zroot/backups/TBH | \
awk '{printf "/sbin/zfs set canmount=noauto %s\n",$1}' | sh
borg.lerctr.org /home/ler $

This has been happening for YEARS and I can't seem to interest anyone in
fixing it.

How can we get to the bottom of this?

borg.lerctr.org /home/ler $ uname -a
FreeBSD borg.lerctr.org 11.0-CURRENT FreeBSD 11.0-CURRENT #56 r268982M:
Tue Jul 22 10:14:59 CDT 2014
r...@borg.lerctr.org:/usr/obj/usr/src/sys/VT-LER  amd64
borg.lerctr.org /home/ler $ ssh tbh uname -a
FreeBSD thebighonker.lerctr.org 10.0-STABLE FreeBSD 10.0-STABLE #39
r269019M: Wed Jul 23 11:44:35 CDT 2014
r...@thebighonker.lerctr.org:/usr/obj/usr/src/sys/GENERIC  amd64
borg.lerctr.org /home/ler $



Try adding -v to the 'zfs send' and see if it gives you more detail.

Can you also try this script for the replication:

http://github.com/allanjude/zxfer

I've done that in the past and nothing, but I will try again.

I will also look at zxfer :)



with the -v, no more info, just what looks like normal messages

13:23:55   3.68G   zroot/usr/src@2014-07-24
13:23:56   3.68G   zroot/usr/src@2014-07-24
13:23:57   3.68G   zroot/usr/src@2014-07-24
13:23:58   3.68G   zroot/usr/src@2014-07-24
13:23:59   3.68G   zroot/usr/src@2014-07-24
13:24:00   3.69G   zroot/usr/src@2014-07-24
13:24:01   3.69G   zroot/usr/src@2014-07-24
13:24:02   3.69G   zroot/usr/src@2014-07-24
13:24:03   3.69G   zroot/usr/src@2014-07-24
13:24:04   3.69G   zroot/usr/src@2014-07-24
13:24:05   3.70G   zroot/usr/src@2014-07-24
13:24:06   3.70G   zroot/usr/src@2014-07-24
13:24:07   3.70G   zroot/usr/src@2014-07-24
13:24:08   3.70G   zroot/usr/src@2014-07-24
13:24:09   3.70G   zroot/usr/src@2014-07-24
13:24:10   3.71G   zroot/usr/src@2014-07-24
13:24:11   3.71G   zroot/usr/src@2014-07-24
cannot receive new filesystem stream: invalid backup stream
borg.lerctr.org /home/ler/bin $
--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 (c) E-Mail: l...@lerctr.org
US Mail: 108 Turvey Cove, Hutto, TX 78634-5688


FreeBSD Quarterly Status Report - Second Quarter 2014

2014-07-24 Thread Glen Barber
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

FreeBSD Project Quarterly Status Report: April - June 2014

   This report covers FreeBSD-related projects between April and June
   2014. This is the second of four reports planned for 2014.

   The second quarter of 2014 was a very busy and productive time for the
   FreeBSD Project. A new FreeBSD Core Team was elected, the FreeBSD Ports
   Management Team branched the second quarterly stable branch, the
   FreeBSD Release Engineering Team was in the process of finalizing the
   FreeBSD 9.3-RELEASE cycle, and many exciting new features have been
   added to FreeBSD.

   Thanks to all the reporters for the excellent work! This report
   contains 24 entries and we hope you enjoy reading it.

   The deadline for submissions covering the period from July to September
   2014 is October 7th, 2014.
 __

FreeBSD Team Reports

 * FreeBSD Core Team
 * FreeBSD Port Management Team
 * FreeBSD Release Engineering Team

Projects

 * Chelsio iSCSI Offload Support
 * CUSE4BSD
 * FreeBSD and Summer of Code 2014
 * New Automounter
 * pkg(8)
 * QEMU bsd-user-Enabled Ports Building
 * RPC/NFS and CTL/iSCSI Performance Optimizations
 * ZFSguru

Kernel

 * PostgreSQL Performance Improvements
 * Running FreeBSD as an Application on Top of the Fiasco.OC
   Microkernel
 * SDIO Driver
 * TMPFS Stability
 * UEFI Boot
 * Updated vt(4) System Console

Architectures

 * FreeBSD/arm64

Ports

 * FreeBSD Python Ports
 * KDE/FreeBSD
 * The Graphics Stack on FreeBSD

Documentation

 * Quarterly Status Reports

Miscellaneous

 * FreeBSD Host Support for OpenStack and OpenContrail
 * The FreeBSD Foundation
 __

FreeBSD Core Team

   Contact: FreeBSD Core Team c...@freebsd.org

   The FreeBSD Core Team constitutes the project's Board of Directors,
   responsible for deciding the project's overall goals and direction as
   well as managing specific areas of the FreeBSD project landscape.

   Topics for core this quarter have included some far-reaching policy
   reviews and some significant changes to the project development
   methodology.

   In May, a new release policy was published and presented at the BSDCan
   developer conference by John Baldwin. The idea is that each major
   release branch (for example, 10.X) is guaranteed to be supported for at
   least five years, but individual point releases on each branch, like
   10.0-RELEASE, will be issued at regular intervals and only the latest
   point release will be supported.

   Another significant change did not receive approval. When the change to
   the Bylaws reforming the core team election process was put to the vote
   of all FreeBSD developers, it failed to reach a quorum.

   June saw the culmination of a long running project to replace the
   project's bug tracking system. As of June 3, the FreeBSD project has
   switched to Bugzilla as its bug tracking system. All of the history of
   GNATS PRs has been preserved, so there is no need to re-open old
   tickets. Work is still going on to replicate some of the integration
   tweaks that had been applied to GNATS, but all necessary functionality
   has been implemented and the project is already seeing the benefits of
   the new capabilities brought by Bugzilla.

   An election to select core members for the next two year term of office
   took place during this period. We would like to thank retiring members
   of core for their years of service. The new core team provides
   continuity with previous core teams: about half are incumbents from the
   previous team, and several former core team members have returned after
   a hiatus. Core now includes two members of the FreeBSD Foundation board
   and one other Foundation staff member, aiding greater coordination at
   the top level of the project. At the same time the core-secretary role
   was passed on to a new volunteer.

   Other activities included providing consultation on licensing terms for
   software within the FreeBSD source tree, and oversight of changes to
   the membership of postmaster and clusteradm.

   Three new src commit bits were issued during this quarter, and one was
   taken into safekeeping.
 __

FreeBSD Port Management Team

   URL: http://www.FreeBSD.org/ports/
   URL: http://www.freebsd.org/doc/en_US.ISO8859-1/articles/contributing-ports/
   URL: http://portsmon.freebsd.org/index.html
   URL: http://www.freebsd.org/portmgr/index.html
   URL: http://blogs.freebsdish.org/portmgr/
   URL: http://www.twitter.com/freebsd_portmgr/
   URL: http://www.facebook.com/portmgr
   URL: http://plus.google.com/communities/108335846196454338383

   Contact: Frederic Culot portmgr-secret...@freebsd.org
   Contact: 

Re: Future of pf / firewall in FreeBSD ? - does it have one ?

2014-07-24 Thread Mark Felder

 On Jul 23, 2014, at 15:59, Bjoern A. Zeeb bzeeb-li...@lists.zabbadoz.net 
 wrote:
 
 There was (is?) another case that in certain situations with certain pf 
 options IPv6/ULP packets would not pass or get corrupted.  I think no one who 
 experienced it never tracked it down to the code but I am sure there are PRs 
 for this;  best bet is that not all header sizes are equal and length/offsets 
 into IPv6 packets are different to IPv4, especially when you scrub.
 

'scrub reassemble tcp' breaks all IPv6 TCP traffic since FreeBSD 9.0. Well, not
entirely breaks it, but things seem to move at the rate of a poor dialup
connection. This is similar to what I've experienced with pf + TSO on Xen.
Related? Possibly! I'd hazard a guess that the reassembly of TCP on IPv6 is
breaking checksums?

Upstream pf from OpenBSD has removed this feature entirely and (I believe)
reworked their scrubbing, but I don't know the details. I can confirm that when
'reassemble tcp' existed on OpenBSD it never broke traffic for me.

Synproxy and IPv6 were also broken last I knew. I can't remember the symptoms,
but it was probably 'nothing works'. I recall synproxy has always been one of
those 'you're gonna shoot your eye out, kid' features, but some people have
used it successfully.
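(For readers following along, the pf.conf lines under discussion look roughly like this; this is the pre-OpenBSD-4.6 scrub syntax used by FreeBSD's pf, and the rules are illustrative, not a recommended configuration:)

```
# Normalize fragmented packets (the commonly recommended form):
scrub in all fragment reassemble
# The modifier reported above to cripple IPv6 TCP since FreeBSD 9.0:
scrub in all reassemble tcp
```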


Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Allan Jude
On 2014-07-24 14:25, Larry Rosenman wrote:
 On 2014-07-24 12:43, Larry Rosenman wrote:
 On 2014-07-24 12:38, Allan Jude wrote:
 On 2014-07-24 13:33, Larry Rosenman wrote:
 TRYING to use zfs send/recv between a 10-STABLE and an 11-CURRENT
 system, and receive the non-descript
 invalid backup stream.

 borg.lerctr.org /home/ler $ sudo bin/backup-TBH-ZFS-initial.sh
 Password:
 receiving full stream of zroot@2014-07-24 into
 zroot/backups/TBH@2014-07-24
 received 41.7KB stream in 300 seconds (142B/sec)
 receiving full stream of zroot/usr@2014-07-24 into
 zroot/backups/TBH/usr@2014-07-24
 received 41.7KB stream in 1 seconds (41.7KB/sec)
 receiving full stream of zroot/usr/local@2014-07-24 into
 zroot/backups/TBH/usr/local@2014-07-24
 received 2.81GB stream in 1116 seconds (2.58MB/sec)
 receiving full stream of zroot/usr/src@2014-07-24 into
 zroot/backups/TBH/usr/src@2014-07-24
 cannot receive new filesystem stream: invalid backup stream
 borg.lerctr.org /home/ler $ cat bin/backup-TBH-ZFS-initial.sh
 #!/bin/sh
 DATE=`date +%Y-%m-%d`
 #DATE2=2013-03-24
 #DATE2=`date -v -1d +%Y-%m-%d`
 # snap the source
 ssh r...@tbh.lerctr.org zfs snapshot -r zroot@${DATE}
 # zfs copy the source to here.
 ssh r...@tbh.lerctr.org zfs send -R -D zroot@${DATE} | \
  ssh home.lerctr.org "zfs recv -F -u -v -d zroot/backups/TBH"
 # make sure we NEVER allow the backup stuff to automount.
 /sbin/zfs list -H -t filesystem -r zroot/backups/TBH | \
 awk '{printf "/sbin/zfs set canmount=noauto %s\n",$1}' | sh
 borg.lerctr.org /home/ler $

 This has been happening for YEARS and I can't seem to interest
 anyone in
 fixing it.

 How can we get to the bottom of this?

 borg.lerctr.org /home/ler $ uname -a
 FreeBSD borg.lerctr.org 11.0-CURRENT FreeBSD 11.0-CURRENT #56 r268982M:
 Tue Jul 22 10:14:59 CDT 2014
 r...@borg.lerctr.org:/usr/obj/usr/src/sys/VT-LER  amd64
 borg.lerctr.org /home/ler $ ssh tbh uname -a
 FreeBSD thebighonker.lerctr.org 10.0-STABLE FreeBSD 10.0-STABLE #39
 r269019M: Wed Jul 23 11:44:35 CDT 2014
 r...@thebighonker.lerctr.org:/usr/obj/usr/src/sys/GENERIC  amd64
 borg.lerctr.org /home/ler $


 Try adding -v to the 'zfs send' and see if it gives you more detail.

 Can you also try this script for the replication:

 http://github.com/allanjude/zxfer
 I've done that in the past and nothing, but I will try again.

 I will also look at zxfer :)
 
 
 with the -v, no more info, just what looks like normal messages
 
 13:23:55   3.68G   zroot/usr/src@2014-07-24
 13:23:56   3.68G   zroot/usr/src@2014-07-24
 13:23:57   3.68G   zroot/usr/src@2014-07-24
 13:23:58   3.68G   zroot/usr/src@2014-07-24
 13:23:59   3.68G   zroot/usr/src@2014-07-24
 13:24:00   3.69G   zroot/usr/src@2014-07-24
 13:24:01   3.69G   zroot/usr/src@2014-07-24
 13:24:02   3.69G   zroot/usr/src@2014-07-24
 13:24:03   3.69G   zroot/usr/src@2014-07-24
 13:24:04   3.69G   zroot/usr/src@2014-07-24
 13:24:05   3.70G   zroot/usr/src@2014-07-24
 13:24:06   3.70G   zroot/usr/src@2014-07-24
 13:24:07   3.70G   zroot/usr/src@2014-07-24
 13:24:08   3.70G   zroot/usr/src@2014-07-24
 13:24:09   3.70G   zroot/usr/src@2014-07-24
 13:24:10   3.71G   zroot/usr/src@2014-07-24
 13:24:11   3.71G   zroot/usr/src@2014-07-24
 cannot receive new filesystem stream: invalid backup stream
 borg.lerctr.org /home/ler/bin $

I notice you are doing a deduplicated stream. Does it work without
deduplication (i.e., zfs send without the -D flag)?
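(Concretely, testing without stream deduplication amounts to dropping -D from the send side of the script quoted above, with the hosts and quoting exactly as in that script; a sketch, not verified against Larry's setup:)

```shell
ssh r...@tbh.lerctr.org zfs send -R zroot@${DATE} | \
 ssh home.lerctr.org "zfs recv -F -u -v -d zroot/backups/TBH"
```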

-- 
Allan Jude





Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Larry Rosenman

On 2014-07-24 13:45, Allan Jude wrote:

On 2014-07-24 14:25, Larry Rosenman wrote:

On 2014-07-24 12:43, Larry Rosenman wrote:

On 2014-07-24 12:38, Allan Jude wrote:

On 2014-07-24 13:33, Larry Rosenman wrote:

TRYING to use zfs send/recv between a 10-STABLE and an 11-CURRENT
system, and receive the non-descript
invalid backup stream.

borg.lerctr.org /home/ler $ sudo bin/backup-TBH-ZFS-initial.sh
Password:
receiving full stream of zroot@2014-07-24 into
zroot/backups/TBH@2014-07-24
received 41.7KB stream in 300 seconds (142B/sec)
receiving full stream of zroot/usr@2014-07-24 into
zroot/backups/TBH/usr@2014-07-24
received 41.7KB stream in 1 seconds (41.7KB/sec)
receiving full stream of zroot/usr/local@2014-07-24 into
zroot/backups/TBH/usr/local@2014-07-24
received 2.81GB stream in 1116 seconds (2.58MB/sec)
receiving full stream of zroot/usr/src@2014-07-24 into
zroot/backups/TBH/usr/src@2014-07-24
cannot receive new filesystem stream: invalid backup stream
borg.lerctr.org /home/ler $ cat bin/backup-TBH-ZFS-initial.sh
#!/bin/sh
DATE=`date +%Y-%m-%d`
#DATE2=2013-03-24
#DATE2=`date -v -1d +%Y-%m-%d`
# snap the source
ssh r...@tbh.lerctr.org zfs snapshot -r zroot@${DATE}
# zfs copy the source to here.
ssh r...@tbh.lerctr.org zfs send -R -D zroot@${DATE} | \
 ssh home.lerctr.org "zfs recv -F -u -v -d zroot/backups/TBH"
# make sure we NEVER allow the backup stuff to automount.
/sbin/zfs list -H -t filesystem -r zroot/backups/TBH | \
awk '{printf "/sbin/zfs set canmount=noauto %s\n",$1}' | sh
borg.lerctr.org /home/ler $

This has been happening for YEARS and I can't seem to interest
anyone in
fixing it.

How can we get to the bottom of this?

borg.lerctr.org /home/ler $ uname -a
FreeBSD borg.lerctr.org 11.0-CURRENT FreeBSD 11.0-CURRENT #56 r268982M:
Tue Jul 22 10:14:59 CDT 2014
r...@borg.lerctr.org:/usr/obj/usr/src/sys/VT-LER  amd64
borg.lerctr.org /home/ler $ ssh tbh uname -a
FreeBSD thebighonker.lerctr.org 10.0-STABLE FreeBSD 10.0-STABLE #39
r269019M: Wed Jul 23 11:44:35 CDT 2014
r...@thebighonker.lerctr.org:/usr/obj/usr/src/sys/GENERIC  amd64
borg.lerctr.org /home/ler $



Try adding -v to the 'zfs send' and see if it gives you more detail.

Can you also try this script for the replication:

http://github.com/allanjude/zxfer

I've done that in the past and nothing, but I will try again.

I will also look at zxfer :)



with the -v, no more info, just what looks like normal messages

13:23:55   3.68G   zroot/usr/src@2014-07-24
13:23:56   3.68G   zroot/usr/src@2014-07-24
13:23:57   3.68G   zroot/usr/src@2014-07-24
13:23:58   3.68G   zroot/usr/src@2014-07-24
13:23:59   3.68G   zroot/usr/src@2014-07-24
13:24:00   3.69G   zroot/usr/src@2014-07-24
13:24:01   3.69G   zroot/usr/src@2014-07-24
13:24:02   3.69G   zroot/usr/src@2014-07-24
13:24:03   3.69G   zroot/usr/src@2014-07-24
13:24:04   3.69G   zroot/usr/src@2014-07-24
13:24:05   3.70G   zroot/usr/src@2014-07-24
13:24:06   3.70G   zroot/usr/src@2014-07-24
13:24:07   3.70G   zroot/usr/src@2014-07-24
13:24:08   3.70G   zroot/usr/src@2014-07-24
13:24:09   3.70G   zroot/usr/src@2014-07-24
13:24:10   3.71G   zroot/usr/src@2014-07-24
13:24:11   3.71G   zroot/usr/src@2014-07-24
cannot receive new filesystem stream: invalid backup stream
borg.lerctr.org /home/ler/bin $


I notice you are doing a deduplicated stream. Does it work without
deduplication (zfs send -D)?
I will try that after this zxfer test I'm running finishes, but IIRC it 
doesn't matter..

--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 (c) E-Mail: l...@lerctr.org
US Mail: 108 Turvey Cove, Hutto, TX 78634-5688


Re: Future of pf / firewall in FreeBSD ? - does it have one ?

2014-07-24 Thread Mark Felder

 On Jul 24, 2014, at 13:43, Mark Felder f...@freebsd.org wrote:
 
 Upstream pf from OpenBSD has removed this feature entirely and (I believe) 
 reworked their scrubbing, but I don't know the details. I can confirm that 
 when reassemble tcp existed on OpenBSD it never broke traffic for me.
 


I'm wrong; 'reassemble tcp' still exists upstream. I must be thinking of
something else that has since been removed but exists in our version. Oh well.


Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Larry Rosenman

On 2014-07-24 13:46, Larry Rosenman wrote:


I notice you are doing a deduplicated stream. Does it work without
deduplication (zfs send -D)?

I will try that after this zxfer test I'm running finishes, but IIRC
it doesn't matter..


borg.lerctr.org /home/ler # zxfer -dFkPvs -g 376 -O r...@tbh.lerctr.org 
-R zroot  zroot/backups/TBH

Creating recursive snapshot zroot@zxfer_26699_20140724135840.
Checking grandfather status of all snapshots marked for deletion...
Grandfather check passed.
Sending zroot@zxfer_26699_20140724135840 to zroot/backups/TBH/zroot.
Sending zroot/ROOT@zxfer_26699_20140724135840 to 
zroot/backups/TBH/zroot/ROOT.
Sending zroot/ROOT/default@zxfer_23699_20140724134435 to 
zroot/backups/TBH/zroot/ROOT/default.
Sending zroot/ROOT/default@zxfer_26699_20140724135840 to 
zroot/backups/TBH/zroot/ROOT/default.

  (incremental to zroot/ROOT/default@zxfer_23699_20140724134435.)
Sending zroot/home@zxfer_26699_20140724135840 to 
zroot/backups/TBH/zroot/home.

Write failed: Cannot allocate memory
cannot receive new filesystem stream: invalid backup stream
Error when zfs send/receiving.
borg.lerctr.org /home/ler #

well that's different...


--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 (c) E-Mail: l...@lerctr.org
US Mail: 108 Turvey Cove, Hutto, TX 78634-5688


Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Mark Martinec

2014-07-24 21:31, Larry Rosenman wrote:

borg.lerctr.org /home/ler # zxfer -dFkPvs -g 376 -O
r...@tbh.lerctr.org -R zroot  zroot/backups/TBH
Creating recursive snapshot zroot@zxfer_26699_20140724135840.
Checking grandfather status of all snapshots marked for deletion...
Grandfather check passed.
Sending zroot@zxfer_26699_20140724135840 to zroot/backups/TBH/zroot.
Sending zroot/ROOT@zxfer_26699_20140724135840 to 
zroot/backups/TBH/zroot/ROOT.

Sending zroot/ROOT/default@zxfer_23699_20140724134435 to
zroot/backups/TBH/zroot/ROOT/default.
Sending zroot/ROOT/default@zxfer_26699_20140724135840 to
zroot/backups/TBH/zroot/ROOT/default.
  (incremental to zroot/ROOT/default@zxfer_23699_20140724134435.)
Sending zroot/home@zxfer_26699_20140724135840 to 
zroot/backups/TBH/zroot/home.



Write failed: Cannot allocate memory

  


cannot receive new filesystem stream: invalid backup stream
Error when zfs send/receiving.
borg.lerctr.org /home/ler #

well that's different...


Sounds familiar, check my posting of today and links therein:

  http://lists.freebsd.org/pipermail/freebsd-net/2014-July/039347.html

Mark


Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Larry Rosenman

On 2014-07-24 14:53, Mark Martinec wrote:

2014-07-24 21:31, Larry Rosenman wrote:

borg.lerctr.org /home/ler # zxfer -dFkPvs -g 376 -O
r...@tbh.lerctr.org -R zroot  zroot/backups/TBH
Creating recursive snapshot zroot@zxfer_26699_20140724135840.
Checking grandfather status of all snapshots marked for deletion...
Grandfather check passed.
Sending zroot@zxfer_26699_20140724135840 to zroot/backups/TBH/zroot.
Sending zroot/ROOT@zxfer_26699_20140724135840 to 
zroot/backups/TBH/zroot/ROOT.

Sending zroot/ROOT/default@zxfer_23699_20140724134435 to
zroot/backups/TBH/zroot/ROOT/default.
Sending zroot/ROOT/default@zxfer_26699_20140724135840 to
zroot/backups/TBH/zroot/ROOT/default.
  (incremental to zroot/ROOT/default@zxfer_23699_20140724134435.)
Sending zroot/home@zxfer_26699_20140724135840 to 
zroot/backups/TBH/zroot/home.



Write failed: Cannot allocate memory

  


cannot receive new filesystem stream: invalid backup stream
Error when zfs send/receiving.
borg.lerctr.org /home/ler #

well that's different...


Sounds familiar, check my posting of today and links therein:

  http://lists.freebsd.org/pipermail/freebsd-net/2014-July/039347.html

Mark

I'm not using netgraph to the best of my knowledge
and the only fails on the SENDING host are:
8 Bucket:64,  0,  41,3555,  257774,  11,   0
12 Bucket:   96,  0,  96,2569,  123653,   0,   0
16 Bucket:  128,  0,   17195, 506,  215573,   0,   0
32 Bucket:  256,  0, 340,4670,  900638,  50,   0
64 Bucket:  512,  0,   10691, 365,  546888,185232,   0
128 Bucket:1024,  0,3563, 905,  348419,   0,   0
256 Bucket:2048,  0,2872, 162,  249995,59834,   0

vmem btag:   56,  0,  192811,   51500,  502264,1723,   0


--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 (c) E-Mail: l...@lerctr.org
US Mail: 108 Turvey Cove, Hutto, TX 78634-5688


Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Allan Jude
On 2014-07-24 15:57, Larry Rosenman wrote:
 On 2014-07-24 14:53, Mark Martinec wrote:
 2014-07-24 21:31, Larry Rosenman wrote:
 borg.lerctr.org /home/ler # zxfer -dFkPvs -g 376 -O
 r...@tbh.lerctr.org -R zroot  zroot/backups/TBH
 Creating recursive snapshot zroot@zxfer_26699_20140724135840.
 Checking grandfather status of all snapshots marked for deletion...
 Grandfather check passed.
 Sending zroot@zxfer_26699_20140724135840 to zroot/backups/TBH/zroot.
 Sending zroot/ROOT@zxfer_26699_20140724135840 to
 zroot/backups/TBH/zroot/ROOT.
 Sending zroot/ROOT/default@zxfer_23699_20140724134435 to
 zroot/backups/TBH/zroot/ROOT/default.
 Sending zroot/ROOT/default@zxfer_26699_20140724135840 to
 zroot/backups/TBH/zroot/ROOT/default.
   (incremental to zroot/ROOT/default@zxfer_23699_20140724134435.)
 Sending zroot/home@zxfer_26699_20140724135840 to
 zroot/backups/TBH/zroot/home.

 Write failed: Cannot allocate memory
   

 cannot receive new filesystem stream: invalid backup stream
 Error when zfs send/receiving.
 borg.lerctr.org /home/ler #

 well that's different...

 Sounds familiar, check my posting of today and links therein:

   http://lists.freebsd.org/pipermail/freebsd-net/2014-July/039347.html

 Mark
 I'm not using netgraph to the best of my knowledge
 and the only fails on the SENDING host are:
 8 Bucket:64,  0,  41,3555,  257774,  11,   0
 12 Bucket:   96,  0,  96,2569,  123653,   0,   0
 16 Bucket:  128,  0,   17195, 506,  215573,   0,   0
 32 Bucket:  256,  0, 340,4670,  900638,  50,   0
 64 Bucket:  512,  0,   10691, 365,  546888,185232,   0
 128 Bucket:1024,  0,3563, 905,  348419,   0,   0
 256 Bucket:2048,  0,2872, 162,  249995,59834,   0
 vmem btag:   56,  0,  192811,   51500,  502264,1723,   0
 
 

I regularly use zxfer to transfer 500+ GiB datasets over the internet.
This week I actually replicated a 2.1 TiB dataset with zxfer without issue.

I wonder which thing is running out of memory. Is there a delay while it
is 'running out of memory', or does it fail immediately? Does running
top while it is working on running out of memory reveal anything?

I would expect to use up a lot of memory while doing deduplication, but
not otherwise.

Note: I most often use openssh-portable rather than base ssh for
replication, as I enable the nonecipher to reduce CPU usage, and adjust
the TcpRcvBuf upwards to actually saturate a gigabit over the internet.
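The tweaks Allan describes could live in an ssh_config fragment along these lines (a sketch: NoneEnabled/NoneSwitch and TcpRcvBuf are HPN-SSH extensions shipped with the openssh-portable port, not stock OpenSSH, and the host name and buffer value are illustrative assumptions):

```
# ~/.ssh/config fragment (HPN-SSH options; values illustrative)
Host backuphost.example.org
    # "none" cipher: authenticate encrypted, then drop encryption for
    # bulk data to cut CPU cost -- only sensible on trusted links
    NoneEnabled yes
    NoneSwitch yes
    # larger TCP receive buffer to keep a high-bandwidth-delay link full
    TcpRcvBuf 4194304
```

Since this is a per-host client option, replication scripts pick it up automatically whenever they ssh to that host.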

-- 
Allan Jude



signature.asc
Description: OpenPGP digital signature


Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Larry Rosenman

On 2014-07-24 15:07, Allan Jude wrote:

On 2014-07-24 15:57, Larry Rosenman wrote:

On 2014-07-24 14:53, Mark Martinec wrote:

2014-07-24 21:31, Larry Rosenman wrote:

borg.lerctr.org /home/ler # zxfer -dFkPvs -g 376 -O
r...@tbh.lerctr.org -R zroot  zroot/backups/TBH
Creating recursive snapshot zroot@zxfer_26699_20140724135840.
Checking grandfather status of all snapshots marked for deletion...
Grandfather check passed.
Sending zroot@zxfer_26699_20140724135840 to zroot/backups/TBH/zroot.
Sending zroot/ROOT@zxfer_26699_20140724135840 to
zroot/backups/TBH/zroot/ROOT.
Sending zroot/ROOT/default@zxfer_23699_20140724134435 to
zroot/backups/TBH/zroot/ROOT/default.
Sending zroot/ROOT/default@zxfer_26699_20140724135840 to
zroot/backups/TBH/zroot/ROOT/default.
  (incremental to zroot/ROOT/default@zxfer_23699_20140724134435.)
Sending zroot/home@zxfer_26699_20140724135840 to
zroot/backups/TBH/zroot/home.



Write failed: Cannot allocate memory

  


cannot receive new filesystem stream: invalid backup stream
Error when zfs send/receiving.
borg.lerctr.org /home/ler #

well that's different...


Sounds familiar, check my posting of today and links therein:

  
http://lists.freebsd.org/pipermail/freebsd-net/2014-July/039347.html


Mark

I'm not using netgraph to the best of my knowledge
and the only fails on the SENDING host are:
8 Bucket:64,  0,  41,3555,  257774,  11,   0
12 Bucket:   96,  0,  96,2569,  123653,   0,   0
16 Bucket:  128,  0,   17195, 506,  215573,   0,   0
32 Bucket:  256,  0, 340,4670,  900638,  50,   0
64 Bucket:  512,  0,   10691, 365,  546888,185232,   0
128 Bucket:1024,  0,3563, 905,  348419,   0,   0
256 Bucket:2048,  0,2872, 162,  249995,59834,   0
vmem btag:   56,  0,  192811,   51500,  502264,1723,   0





I regularly use zxfer to transfer 500+ GiB datasets over the internet.
This week I actually replicated a 2.1 TiB dataset with zxfer without 
issue.


I wonder which thing is running out of memory. Is there a delay while 
it

is 'running out of memory', or does it fail immediately? Does running
top while it is working on running out of memory reveal anything?

I would expect to use up a lot of memory while doing deduplication, but
not otherwise.

Note: I most often use openssh-portable rather than base ssh for
replication, as I enable the nonecipher to reduce CPU usage, and adjust
the TcpRcvBuf upwards to actually saturate a gigabit over the internet.


I wasn't watching exactly what it was doing, but the sending box has 16G 
and 18G Swap and swap

has NOT been touched.

last pid: 74288;  load averages:  4.70,  5.61,  5.91   up 1+03:14:18  15:10:44

115 processes: 3 running, 112 sleeping
CPU:  0.6% user, 33.3% nice,  0.6% system,  0.1% interrupt, 65.4% idle
Mem: 847M Active, 761M Inact, 14G Wired, 4616K Cache, 357M Free
ARC: 12G Total, 6028M MFU, 5281M MRU, 3152K Anon, 120M Header, 688M Other

Swap: 18G Total, 18G Free

so I have zero idea where to go here.


--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 (c) E-Mail: l...@lerctr.org
US Mail: 108 Turvey Cove, Hutto, TX 78634-5688


Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Mark Martinec

2014-07-24 21:57, Larry Rosenman wrote:

Sending zroot/home@zxfer_26699_20140724135840 to 
zroot/backups/TBH/zroot/home.

Write failed: Cannot allocate memory

  

cannot receive new filesystem stream: invalid backup stream
Error when zfs send/receiving.
borg.lerctr.org /home/ler #

well that's different...


Sounds familiar, check my posting of today and links therein:
  http://lists.freebsd.org/pipermail/freebsd-net/2014-July/039347.html



I'm not using netgraph to the best of my knowledge


Check to make sure:
  vmstat -z | fgrep NetGraph
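Mark's FAIL-column check generalizes: a small awk filter (a sketch; the sample lines are copied from Larry's output above) prints every vmstat -z zone whose FAIL column is nonzero:

```shell
# Print UMA zones whose FAIL column (next-to-last field) is nonzero.
# vmstat -z fields after the zone name: SIZE, LIMIT, USED, FREE, REQ, FAIL, SLEEP.
# The sample here stands in for live `vmstat -z` output.
vmstat_sample='8 Bucket:64,  0,  41,3555,  257774,  11,   0
16 Bucket:  128,  0,   17195, 506,  215573,   0,   0
64 Bucket:  512,  0,   10691, 365,  546888,185232,   0'

printf '%s\n' "$vmstat_sample" | awk -F',' '$(NF-1) + 0 != 0'
# prints the "8 Bucket" and "64 Bucket" lines (nonzero FAIL)
# on a live system: vmstat -z | awk -F',' '$(NF-1) + 0 != 0'
```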


and the only fails on the SENDING host are:
8 Bucket:64,  0,  41,3555,  257774,  11,   0
12 Bucket:   96,  0,  96,2569,  123653,   0,   0
16 Bucket:  128,  0,   17195, 506,  215573,   0,   0
32 Bucket:  256,  0, 340,4670,  900638,  50,   0
64 Bucket:  512,  0,   10691, 365,  546888,185232,   0
128 Bucket:1024,  0,3563, 905,  348419,   0,   0
256 Bucket:2048,  0,2872, 162,  249995,59834,   0
vmem btag:   56,  0,  192811,   51500,  502264,1723,   0



Adam Vande More gave other suggestions on that thread from 2011:

  
http://lists.freebsd.org/pipermail/freebsd-emulation/2011-July/008971.html



Mark


Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Larry Rosenman

On 2014-07-24 15:26, Mark Martinec wrote:

2014-07-24 21:57, Larry Rosenman wrote:

Sending zroot/home@zxfer_26699_20140724135840 to 
zroot/backups/TBH/zroot/home.

Write failed: Cannot allocate memory

  

cannot receive new filesystem stream: invalid backup stream
Error when zfs send/receiving.
borg.lerctr.org /home/ler #

well that's different...


Sounds familiar, check my posting of today and links therein:
  
http://lists.freebsd.org/pipermail/freebsd-net/2014-July/039347.html



I'm not using netgraph to the best of my knowledge


Check to make sure:
  vmstat -z | fgrep NetGraph


and the only fails on the SENDING host are:
8 Bucket:64,  0,  41,3555,  257774,  11,   0
12 Bucket:   96,  0,  96,2569,  123653,   0,   0
16 Bucket:  128,  0,   17195, 506,  215573,   0,   0
32 Bucket:  256,  0, 340,4670,  900638,  50,   0
64 Bucket:  512,  0,   10691, 365,  546888,185232,   0
128 Bucket:1024,  0,3563, 905,  348419,   0,   0
256 Bucket:2048,  0,2872, 162,  249995,59834,   0
vmem btag:   56,  0,  192811,   51500,  502264,1723,   0



Adam Vande More gave other suggestions on that thread from 2011:

  
http://lists.freebsd.org/pipermail/freebsd-emulation/2011-July/008971.html



Mark



thebighonker.lerctr.org /home/ler $ vmstat -z|fgrep NetGraph
thebighonker.lerctr.org /home/ler $

This is on physical hardware, FWIW.

thebighonker.lerctr.org /home/ler $ vmstat -z
ITEM   SIZE  LIMIT USED FREE  REQ FAIL SLEEP

UMA Kegs:   384,  0, 202,   8, 202,   0,   0
UMA Zones: 1664,  0, 202,   0, 202,   0,   0
UMA Slabs:   80,  0,  361152,   38448, 1561588,   0,   0
UMA RCntSlabs:   88,  0,4360,   5,4360,   0,   0
UMA Hash:   256,  0,   8,  82,  82,   0,   0
4 Bucket:32,  0,4816,   33684, 2974043,   0,   0
6 Bucket:48,  0,6118,2514,  624434,   0,   0
8 Bucket:64,  0, 161,3435,  265443,  11,   0
12 Bucket:   96,  0, 177,2488,  130226,   0,   0
16 Bucket:  128,  0,   20301, 531,  224788,   0,   0
32 Bucket:  256,  0, 433,4577,  945047,  50,   0
64 Bucket:  512,  0,   11137, 575,  579209,211292,   0
128 Bucket:1024,  0,3509, 959,  369839,   0,   0
256 Bucket:2048,  0,2847, 187,  269044,59834,   0
vmem btag:   56,  0,  209393,   34918,  557576,1723,   0
VM OBJECT:  256,  0,   91384, 791, 3480613,   0,   0
RADIX NODE: 144,  0,  227133,   37683,16029821,   0,   0
MAP:240,  0,   3,  61,   3,   0,   0
KMAP ENTRY: 128,  0,  10, 269,  10,   0,   0
MAP ENTRY:  128,  0,   12666,   23604, 9357932,   0,   0
VMSPACE:448,  0, 108, 306,   75357,   0,   0
fakepg: 104,  0,   0,   0,   0,   0,   0
mt_zone:   4112,  0, 368,   0, 368,   0,   0
16:  16,  0,  267839,2488,45160196,   0,   0
32:  32,  0,  148139,1486,39174301,   0,   0
64:  64,  0,  291630,  504078,40379996,   0,   0
128:128,  0,  182409,   35490,63455920,   0,   0
256:256,  0,   89368,  230912,35841036,   0,   0
512:512,  0, 741,3987,24215854,   0,   0
1024:  1024,  0,1371,1385, 1726440,   0,   0
2048:  2048,  0, 277, 417,10240207,   0,   0
4096:  4096,  0,   38306,   17186,  913638,   0,   0
SLEEPQUEUE:  80,  0, 613, 658, 613,   0,   0
64 pcpu:  8,  0,1884,1188,1884,   0,   0
Files:   80,  0, 829,1571, 4279893,   0,   0
rl_entry:40,  0, 209,1891, 209,   0,   0
TURNSTILE:  136,  0, 613, 307, 613,   0,   0
umtx pi: 96,  0,   0,   0,   0,   0,   0
MAC labels:  40,  0,   0,   0,   0,   0,   0
PROC:  1208,  0, 131, 121,   76219,   0,   0
THREAD:1168,  0, 519,  93,2039,   0,   0
cpuset:  72,  0, 270, 280, 426,   0,   0
cyclic_id_cache: 64,  0,   0,   0,   0,   0,   0
audit_record:  1248,  0,   0,   0,   0,   0,   0
mbuf_packet:256, 6519810,4081,3256, 4910227,   0,   0
mbuf:   

Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Larry Rosenman

On 2014-07-24 15:44, Larry Rosenman wrote:

On 2014-07-24 15:26, Mark Martinec wrote:

2014-07-24 21:57, Larry Rosenman wrote:

Sending zroot/home@zxfer_26699_20140724135840 to 
zroot/backups/TBH/zroot/home.

Write failed: Cannot allocate memory

  

cannot receive new filesystem stream: invalid backup stream
Error when zfs send/receiving.
borg.lerctr.org /home/ler #

well that's different...


Sounds familiar, check my posting of today and links therein:
  
http://lists.freebsd.org/pipermail/freebsd-net/2014-July/039347.html



I'm not using netgraph to the best of my knowledge


Check to make sure:
  vmstat -z | fgrep NetGraph


and the only fails on the SENDING host are:
8 Bucket:64,  0,  41,3555,  257774,  11,   0
12 Bucket:   96,  0,  96,2569,  123653,   0,   0
16 Bucket:  128,  0,   17195, 506,  215573,   0,   0
32 Bucket:  256,  0, 340,4670,  900638,  50,   0
64 Bucket:  512,  0,   10691, 365,  546888,185232,   0
128 Bucket:1024,  0,3563, 905,  348419,   0,   0
256 Bucket:2048,  0,2872, 162,  249995,59834,   0
vmem btag:   56,  0,  192811,   51500,  502264,1723,   0



Adam Vande More gave other suggestions on that thread from 2011:

  
http://lists.freebsd.org/pipermail/freebsd-emulation/2011-July/008971.html



Mark



thebighonker.lerctr.org /home/ler $ vmstat -z|fgrep NetGraph
thebighonker.lerctr.org /home/ler $

This on physical hardware, FWIW.

thebighonker.lerctr.org /home/ler $ vmstat -z
ITEM   SIZE  LIMIT USED FREE  REQ FAIL SLEEP

UMA Kegs:   384,  0, 202,   8, 202,   0,   0
UMA Zones: 1664,  0, 202,   0, 202,   0,   0
UMA Slabs:   80,  0,  361152,   38448, 1561588,   0,   0
UMA RCntSlabs:   88,  0,4360,   5,4360,   0,   0
UMA Hash:   256,  0,   8,  82,  82,   0,   0
4 Bucket:32,  0,4816,   33684, 2974043,   0,   0
6 Bucket:48,  0,6118,2514,  624434,   0,   0
8 Bucket:64,  0, 161,3435,  265443,  11,   0
12 Bucket:   96,  0, 177,2488,  130226,   0,   0
16 Bucket:  128,  0,   20301, 531,  224788,   0,   0
32 Bucket:  256,  0, 433,4577,  945047,  50,   0
64 Bucket:  512,  0,   11137, 575,  579209,211292,   0
128 Bucket:1024,  0,3509, 959,  369839,   0,   0
256 Bucket:2048,  0,2847, 187,  269044,59834,   0
vmem btag:   56,  0,  209393,   34918,  557576,1723,   0
VM OBJECT:  256,  0,   91384, 791, 3480613,   0,   0
RADIX NODE: 144,  0,  227133,   37683,16029821,   0,   0
MAP:240,  0,   3,  61,   3,   0,   0
KMAP ENTRY: 128,  0,  10, 269,  10,   0,   0
MAP ENTRY:  128,  0,   12666,   23604, 9357932,   0,   0
VMSPACE:448,  0, 108, 306,   75357,   0,   0
fakepg: 104,  0,   0,   0,   0,   0,   0
mt_zone:   4112,  0, 368,   0, 368,   0,   0
16:  16,  0,  267839,2488,45160196,   0,   0
32:  32,  0,  148139,1486,39174301,   0,   0
64:  64,  0,  291630,  504078,40379996,   0,   0
128:128,  0,  182409,   35490,63455920,   0,   0
256:256,  0,   89368,  230912,35841036,   0,   0
512:512,  0, 741,3987,24215854,   0,   0
1024:  1024,  0,1371,1385, 1726440,   0,   0
2048:  2048,  0, 277, 417,10240207,   0,   0
4096:  4096,  0,   38306,   17186,  913638,   0,   0
SLEEPQUEUE:  80,  0, 613, 658, 613,   0,   0
64 pcpu:  8,  0,1884,1188,1884,   0,   0
Files:   80,  0, 829,1571, 4279893,   0,   0
rl_entry:40,  0, 209,1891, 209,   0,   0
TURNSTILE:  136,  0, 613, 307, 613,   0,   0
umtx pi: 96,  0,   0,   0,   0,   0,   0
MAC labels:  40,  0,   0,   0,   0,   0,   0
PROC:  1208,  0, 131, 121,   76219,   0,   0
THREAD:1168,  0, 519,  93,2039,   0,   0
cpuset:  72,  0, 270, 280, 426,   0,   0
cyclic_id_cache: 64,  0,   0,   0,   0,   0,   0
audit_record:  1248,  0,   0,   0,   0,   0,   0


Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Allan Jude
On 2014-07-24 16:11, Larry Rosenman wrote:
 On 2014-07-24 15:07, Allan Jude wrote:
 On 2014-07-24 15:57, Larry Rosenman wrote:
 On 2014-07-24 14:53, Mark Martinec wrote:
 2014-07-24 21:31, Larry Rosenman wrote:
 borg.lerctr.org /home/ler # zxfer -dFkPvs -g 376 -O
 r...@tbh.lerctr.org -R zroot  zroot/backups/TBH
 Creating recursive snapshot zroot@zxfer_26699_20140724135840.
 Checking grandfather status of all snapshots marked for deletion...
 Grandfather check passed.
 Sending zroot@zxfer_26699_20140724135840 to zroot/backups/TBH/zroot.
 Sending zroot/ROOT@zxfer_26699_20140724135840 to
 zroot/backups/TBH/zroot/ROOT.
 Sending zroot/ROOT/default@zxfer_23699_20140724134435 to
 zroot/backups/TBH/zroot/ROOT/default.
 Sending zroot/ROOT/default@zxfer_26699_20140724135840 to
 zroot/backups/TBH/zroot/ROOT/default.
   (incremental to zroot/ROOT/default@zxfer_23699_20140724134435.)
 Sending zroot/home@zxfer_26699_20140724135840 to
 zroot/backups/TBH/zroot/home.

 Write failed: Cannot allocate memory
   

 cannot receive new filesystem stream: invalid backup stream
 Error when zfs send/receiving.
 borg.lerctr.org /home/ler #

 well that's different...

 Sounds familiar, check my posting of today and links therein:

   http://lists.freebsd.org/pipermail/freebsd-net/2014-July/039347.html

 Mark
 I'm not using netgraph to the best of my knowledge
 and the only fails on the SENDING host are:
 8 Bucket:64,  0,  41,3555,  257774,  11,   0
 12 Bucket:   96,  0,  96,2569,  123653,   0,   0
 16 Bucket:  128,  0,   17195, 506,  215573,   0,   0
 32 Bucket:  256,  0, 340,4670,  900638,  50,   0
 64 Bucket:  512,  0,   10691, 365,  546888,185232,   0
 128 Bucket:1024,  0,3563, 905,  348419,   0,   0
 256 Bucket:2048,  0,2872, 162,  249995,59834,   0
 vmem btag:   56,  0,  192811,   51500,  502264,1723,   0



 I regularly use zxfer to transfer 500+ GiB datasets over the internet.
 This week I actually replicated a 2.1 TiB dataset with zxfer without
 issue.

 I wonder which thing is running out of memory. Is there a delay while it
 is 'running out of memory', or does it fail immediately? Does running
 top while it is working on running out of memory reveal anything?

 I would expect to use up a lot of memory while doing deduplication, but
 not otherwise.

 Note: I most often use openssh-portable rather than base ssh for
 replication, as I enable the nonecipher to reduce CPU usage, and adjust
 the TcpRcvBuf upwards to actually saturate a gigabit over the internet.
 
 I wasn't watching exactly what it was doing, but the sending box has 16G
 and 18G Swap and swap
 has NOT been touched.
 
 last pid: 74288;  load averages:  4.70,  5.61,  5.91up 1+03:14:18 
 15:10:44
 115 processes: 3 running, 112 sleeping
 CPU:  0.6% user, 33.3% nice,  0.6% system,  0.1% interrupt, 65.4% idle
 Mem: 847M Active, 761M Inact, 14G Wired, 4616K Cache, 357M Free
 ARC: 12G Total, 6028M MFU, 5281M MRU, 3152K Anon, 120M Header, 688M Other
 Swap: 18G Total, 18G Free
 
 so I have zero idea where to go here.
 
 

Most ZFS memory usage is 'wired' and so cannot be swapped, so lack of
swap activity isn't a good indicator.

-- 
Allan Jude





Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Larry Rosenman

On 2014-07-24 18:56, Allan Jude wrote:

On 2014-07-24 16:11, Larry Rosenman wrote:

On 2014-07-24 15:07, Allan Jude wrote:

On 2014-07-24 15:57, Larry Rosenman wrote:

On 2014-07-24 14:53, Mark Martinec wrote:

2014-07-24 21:31, Larry Rosenman wrote:

borg.lerctr.org /home/ler # zxfer -dFkPvs -g 376 -O
r...@tbh.lerctr.org -R zroot  zroot/backups/TBH
Creating recursive snapshot zroot@zxfer_26699_20140724135840.
Checking grandfather status of all snapshots marked for 
deletion...

Grandfather check passed.
Sending zroot@zxfer_26699_20140724135840 to 
zroot/backups/TBH/zroot.

Sending zroot/ROOT@zxfer_26699_20140724135840 to
zroot/backups/TBH/zroot/ROOT.
Sending zroot/ROOT/default@zxfer_23699_20140724134435 to
zroot/backups/TBH/zroot/ROOT/default.
Sending zroot/ROOT/default@zxfer_26699_20140724135840 to
zroot/backups/TBH/zroot/ROOT/default.
  (incremental to zroot/ROOT/default@zxfer_23699_20140724134435.)
Sending zroot/home@zxfer_26699_20140724135840 to
zroot/backups/TBH/zroot/home.



Write failed: Cannot allocate memory

  


cannot receive new filesystem stream: invalid backup stream
Error when zfs send/receiving.
borg.lerctr.org /home/ler #

well that's different...


Sounds familiar, check my posting of today and links therein:

  
http://lists.freebsd.org/pipermail/freebsd-net/2014-July/039347.html


Mark

I'm not using netgraph to the best of my knowledge
and the only fails on the SENDING host are:
8 Bucket:64,  0,  41,3555,  257774,  11,   0
12 Bucket:   96,  0,  96,2569,  123653,   0,   0
16 Bucket:  128,  0,   17195, 506,  215573,   0,   0
32 Bucket:  256,  0, 340,4670,  900638,  50,   0
64 Bucket:  512,  0,   10691, 365,  546888,185232,   0
128 Bucket:1024,  0,3563, 905,  348419,   0,   0
256 Bucket:2048,  0,2872, 162,  249995,59834,   0
vmem btag:   56,  0,  192811,   51500,  502264,1723,   0





I regularly use zxfer to transfer 500+ GiB datasets over the 
internet.

This week I actually replicated a 2.1 TiB dataset with zxfer without
issue.

I wonder which thing is running out of memory. Is there a delay while 
it

is 'running out of memory', or does it fail immediately? Does running
top while it is working on running out of memory reveal anything?

I would expect to use up a lot of memory while doing deduplication, 
but

not otherwise.

Note: I most often use openssh-portable rather than base ssh for
replication, as I enable the nonecipher to reduce CPU usage, and 
adjust
the TcpRcvBuf upwards to actually saturate a gigabit over the 
internet.


I wasn't watching exactly what it was doing, but the sending box has 
16G

and 18G Swap and swap
has NOT been touched.

last pid: 74288;  load averages:  4.70,  5.61,  5.91up 1+03:14:18
15:10:44
115 processes: 3 running, 112 sleeping
CPU:  0.6% user, 33.3% nice,  0.6% system,  0.1% interrupt, 65.4% idle
Mem: 847M Active, 761M Inact, 14G Wired, 4616K Cache, 357M Free
ARC: 12G Total, 6028M MFU, 5281M MRU, 3152K Anon, 120M Header, 688M 
Other

Swap: 18G Total, 18G Free

so I have zero idea where to go here.




Most ZFS memory usage is 'wired' and so cannot be swapped, so lack of
swap activity isn't a good indicator.
I would expect ZFS to give up ARC when it needed memory and couldn't get it.

I also am running Karl Denninger's ARC patch that makes the ARC MUCH more 
responsive to freeing ARC when the system needs memory.

--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 (c) E-Mail: l...@lerctr.org
US Mail: 108 Turvey Cove, Hutto, TX 78634-5688


Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Mark Martinec

2014-07-25 01:36, Larry Rosenman wrote:


#!/bin/sh
DATE=`date +%Y-%m-%d`
#DATE2=2013-03-24
#DATE2=`date -v -1d +%Y-%m-%d`
# snap the source
ssh r...@tbh.lerctr.org zfs snapshot -r zroot@${DATE}
# zfs copy the source to here.
ssh r...@tbh.lerctr.org zfs send  -v -R zroot@${DATE} | \
 ssh home.lerctr.org "zfs recv -F -u -v -d zroot/backups/TBH2"


Btw, this double-ssh looks awkward, why not just:

  ssh r...@tbh.lerctr.org zfs send ... | zfs recv ...

or better yet:

  ssh r...@tbh.lerctr.org zfs send ... | mbuffer -m 16M | zfs recv ...

(The misc/mbuffer compensates for bursty zfs reads and writes.
 A note to myself: I should suggest to Allan to add mbuffer
 in a pipe as used in sysutils/zxfer, instead of patching zxfer
 for our local use :)
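The buffered pipeline Mark suggests might be sketched like this (a sketch only: the hostnames, pool, and 16M buffer are illustrative, not Larry's actual setup, and mbuffer must be installed on both ends; it is wrapped in a function so it can be reviewed without live hosts):

```shell
# Buffered zfs send/recv pipeline (illustrative hosts/datasets).
# mbuffer decouples bursty `zfs send` reads from bursty `zfs recv` writes.
replicate_pool() {
    SNAP="zroot@$(date +%Y-%m-%d)"
    # buffer on the sending side, ship over ssh, buffer again, then receive
    ssh root@source.example.org "zfs send -R ${SNAP} | mbuffer -q -m 16M" | \
        mbuffer -q -m 16M | \
        zfs recv -F -u -v -d zroot/backups/source
}
# call replicate_pool to actually run it
```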

Mark


Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Allan Jude
On 2014-07-24 20:46, Mark Martinec wrote:
 2014-07-25 01:36 Larry Rosenman wrote:
 
 #!/bin/sh
 DATE=`date +%Y-%m-%d`
 #DATE2=2013-03-24
 #DATE2=`date -v -1d +%Y-%m-%d`
 # snap the source
 ssh r...@tbh.lerctr.org zfs snapshot -r zroot@${DATE}
 # zfs copy the source to here.
 ssh r...@tbh.lerctr.org zfs send  -v -R zroot@${DATE} | \
  ssh home.lerctr.org "zfs recv -F -u -v -d zroot/backups/TBH2"
 
 Btw, this double-ssh looks awkward, why not just:
 
   ssh r...@tbh.lerctr.org zfs send ... | zfs recv ...
 
 or better yet:
 
   ssh r...@tbh.lerctr.org zfs send ... | mbuffer -m 16M | zfs recv ...
 
 (The misc/mbuffer compensates for bursty zfs reads and writes.
  A note to myself: I should suggest to Allan to add mbuffer
  in a pipe as used in sysutils/zxfer, instead of patching zxfer
  for our local use :)
 
 Mark

zxfer can already do this, with the -D option
I actually use misc/clpbar and get a progress bar as well

-D 'bar -s %%size%% -bl 1m -bs 128m'

or in your case: -D 'mbuffer -m 16M'
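Putting Allan's -D suggestion together with the invocation earlier in the thread, a complete command might look like this (a sketch; the host and dataset names are illustrative, -q is added to keep mbuffer quiet, and it is wrapped in a function so it can be reviewed without a live setup):

```shell
# zxfer with its transfer stream piped through mbuffer via -D (sketch).
backup_with_zxfer() {
    zxfer -dFkPv -D 'mbuffer -q -m 16M' \
        -O root@backuphost.example.org -R zroot zroot/backups/source
}
# call backup_with_zxfer to actually run it
```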


-- 
Allan Jude





Re: Future of pf / firewall in FreeBSD ? - does it have one ?

2014-07-24 Thread Peter Wemm
On Wednesday 23 July 2014 20:59:19 Bjoern A. Zeeb wrote:
 On 23 Jul 2014, at 20:41 , Allan Jude allanj...@freebsd.org wrote:
  On 2014-07-23 16:38, Bjoern A. Zeeb wrote:
  On 23 Jul 2014, at 15:42 , Cy Schubert cy.schub...@komquats.com wrote:
  Taking this discussion slightly sideways but touching on this thread a
  little, each of our packet filters will need nat66 support too. Pf
  doesn't
  support it for sure. I've been told that ipfw may and I suspect ipfilter
  doesn't as it was on Darren's todo list from 2009.
  
  our pf does support IPv6 prefix rewriting quite nicely and has for years.
  
  Bjoern: What IPv6 stuff does our pf not do well?
 
 I think the most pressing, as Peter said, is fragment handling, though a
 good fraction of major content providers seems to do mss clamping to a min
 IPv6 mtu on IPv6 and drop fragments at the edge (not much different to
 IPv4, which makes you wonder?). Whoever is clever will think of how many
 different queueing and fragment handling implementations we need in the
 kernel, and how often we have to do it on an end node that might also run a
 firewall,  pick one we have, turn it into a library thing, apply it to all
 places, and then add the latest IETF suggestions on top of it.

Correct.

There is code in the openbsd cvs history where they added it while the 
internal APIs looked similar enough to ours.  It's simpler than ipv4 
reassembly - taking advantage of things like overlapping fragments not being 
allowed.

I'm almost desperate enough to take a shot at it myself, but mbufs and I do 
not get along.  Nobody wants code I've touched to be in the tree if mbufs are 
involved.


The initial commits.. first the supporting changes:

(refactor code for reuse)
http://openbsd.cs.toronto.edu/cgi-bin/cvsweb/src/sys/net/pf_norm.c.diff?r1=1.128&r2=1.129

(add ipv6 defrag/refrag)
http://openbsd.cs.toronto.edu/cgi-bin/cvsweb/src/sys/net/pf_norm.c.diff?r1=1.129&r2=1.130

Then they added the code to defragment/refragment:
(pf_test6 defrag/refrag)
http://openbsd.cs.toronto.edu/cgi-bin/cvsweb/src/sys/net/pf.c.diff?r1=1.729&r2=1.730


The catch is that they fixed a lot of edge cases so one needs to follow the 
history forward a bit to make sure it's covered.  The other problem is our 
codebase is even older than when this was added so some looking at older 
commits is required.

In the time since the feature was added, they have refactored it a few times 
and merged the two code paths for ipv4 and ipv6.  It bears no resemblance to 
what we have in our tree.


The killer reason why this is a problem that needs to be solved: IPv6 + 
DNSSEC exercises this code a lot.

Performance isn't a factor - it's basic functionality that's at stake.

-- 
Peter Wemm - pe...@wemm.org; pe...@freebsd.org; pe...@yahoo-inc.com; KI6FJV
UTF-8: for when a ' or ... just won’t do…



Re: zfs send/recv: STILL invalid Backup Stream

2014-07-24 Thread Larry Rosenman

On 2014-07-24 19:56, Allan Jude wrote:

On 2014-07-24 20:46, Mark Martinec wrote:

2014-07-25 01:36 Larry Rosenman wrote:


#!/bin/sh
DATE=`date +%Y-%m-%d`
#DATE2=2013-03-24
#DATE2=`date -v -1d +%Y-%m-%d`
# snap the source
ssh r...@tbh.lerctr.org zfs snapshot -r zroot@${DATE}
# zfs copy the source to here.
ssh r...@tbh.lerctr.org zfs send  -v -R zroot@${DATE} | \
 ssh home.lerctr.org "zfs recv -F -u -v -d zroot/backups/TBH2"


Btw, this double-ssh looks awkward, why not just:

  ssh r...@tbh.lerctr.org zfs send ... | zfs recv ...

or better yet:

  ssh r...@tbh.lerctr.org zfs send ... | mbuffer -m 16M | zfs recv 
...


(The misc/mbuffer compensates for bursty zfs reads and writes.
 A note to myself: I should suggest to Allan to add mbuffer
 in a pipe as used in sysutils/zxfer, instead of patching zxfer
 for our local use :)

Mark


zxfer can already do this, with the -D option
I actually use misc/clpbar and get a progress bar as well

-D 'bar -s %%size%% -bl 1m -bs 128m'

or in your case: -D 'mbuffer -m 16M'



Ok, I did the mbuffer trick, and the SEND side is where the memory issue 
is:

borg.lerctr.org /home/ler/bin $ tail zfs-send.log
23:28:12   15.7G   zroot/home@2014-07-24_22:56
23:28:13   15.7G   zroot/home@2014-07-24_22:56
23:28:14   15.7G   zroot/home@2014-07-24_22:56
23:28:15   15.7G   zroot/home@2014-07-24_22:56
23:28:16   15.7G   zroot/home@2014-07-24_22:56
23:28:17   15.7G   zroot/home@2014-07-24_22:56
23:28:18   15.7G   zroot/home@2014-07-24_22:56
23:28:19   15.7G   zroot/home@2014-07-24_22:56
23:28:20   15.8G   zroot/home@2014-07-24_22:56
Write failed: Cannot allocate memory
borg.lerctr.org /home/ler/bin $

borg.lerctr.org /home/ler/bin $ tail zfs-recv.log
cannot receive new filesystem stream: invalid backup stream
borg.lerctr.org /home/ler/bin $

borg.lerctr.org /home/ler/bin $  cat backup-TBH-ZFS-initial.sh
#!/bin/sh
DATE=`date +%Y-%m-%d_%H:%M`
#DATE2=2013-03-24
#DATE2=`date -v -1d +%Y-%m-%d`
# snap the source
ssh r...@tbh.lerctr.org zfs snapshot -r zroot@${DATE}
# zfs copy the source to here.
ssh r...@tbh.lerctr.org 2>zfs-send.log zfs send -v -R zroot@${DATE} | \
 mbuffer -m 16M 2>mbuffer.log | \
 zfs recv -F -u -v -d zroot/backups/TBH3 2>zfs-recv.log
# make sure we NEVER allow the backup stuff to automount.
/sbin/zfs list -H -t filesystem -r zroot/backups/TBH3 | \
awk '{printf "/sbin/zfs set canmount=noauto %s\n", $1}' | sh
borg.lerctr.org /home/ler/bin $
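A slightly hardened variant of the script above might look like the following (a sketch: set -eu added, hosts and dataset names made symbolic and illustrative, not the poster's actual setup; the main work is wrapped in a function so the script can be reviewed without live hosts):

```shell
#!/bin/sh
# Hardened replication script sketch (hosts/datasets illustrative).
set -eu
DATE=$(date +%Y-%m-%d_%H:%M)
SRC=root@source.example.org
DEST=zroot/backups/source

main() {
    # snapshot the source recursively
    ssh "$SRC" "zfs snapshot -r zroot@${DATE}"

    # replicate, buffering the stream; log each stage separately
    ssh "$SRC" "zfs send -v -R zroot@${DATE}" 2>zfs-send.log | \
        mbuffer -q -m 16M 2>mbuffer.log | \
        zfs recv -F -u -v -d "$DEST" 2>zfs-recv.log

    # never let the backup datasets automount
    zfs list -H -o name -t filesystem -r "$DEST" | \
        while read -r fs; do zfs set canmount=noauto "$fs"; done
}

# main "$@"   # uncomment to run for real
```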

borg.lerctr.org /home/ler/bin $ ssh tbh vmstat -z
ITEM   SIZE  LIMIT USED FREE  REQ FAIL SLEEP

UMA Kegs:   384,  0, 202,   8, 202,   0,   0
UMA Zones: 1664,  0, 202,   0, 202,   0,   0
UMA Slabs:   80,  0,  363320,   44080, 2348572,   0,   0
UMA RCntSlabs:   88,  0,4484,  16,4484,   0,   0
UMA Hash:   256,  0,   7,  83,  82,   0,   0
4 Bucket:32,  0,1911,   36589, 5255345,   0,   0
6 Bucket:48,  0,9406,3542,  878903,   0,   0
8 Bucket:64,  0,  42,3554,  298443,  11,   0
12 Bucket:   96,  0,  93,2572,  166067,   0,   0
16 Bucket:  128,  0,   30447, 987,  301403,   0,   0
32 Bucket:  256,  0, 352,4658, 1157489,  50,   0
64 Bucket:  512,  0,   13669, 995, 1113780,268080,   0
128 Bucket:1024,  0,3646, 822,  524977,   0,   0
256 Bucket:2048,  0,3648, 114,  482627,59834,   0
vmem btag:   56,  0,  208448,   49779,  758362,1821,   0
VM OBJECT:  256,  0,   98960,1570, 4440323,   0,   0
RADIX NODE: 144,  0,  235166,   29650,22669417,   0,   0
MAP:240,  0,   3,  61,   3,   0,   0
KMAP ENTRY: 128,  0,  10, 269,  10,   0,   0
MAP ENTRY:  128,  0,   11828,   24442,12463199,   0,   0
VMSPACE:448,  0, 103, 311,   96786,   0,   0
fakepg: 104,  0,   0,   0,   0,   0,   0
mt_zone:   4112,  0, 368,   0, 368,   0,   0
16:  16,  0,  264961,9382,56032463,   0,   0
32:  32,  0,  155626,2874,55177821,   0,   0
64:  64,  0,  123597,  672111,53838666,   0,   0
128:128,  0,  159107,   58792,82084329,   0,   0
256:256,  0,   97004,  223276,48661927,   0,   0
512:512,  0, 737,3991,33323191,   0,   0
1024:  1024,  0,1367,1389, 2330023,   0,   0
2048:  
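The notable signal in this `vmstat -z` output is the FAIL column: `64 Bucket` shows 268080 failed allocations and `256 Bucket` shows 59834. A small sketch, assuming only the comma-separated row layout shown above, that extracts zones with nonzero failure counts:

```python
def failing_zones(vmstat_z_output):
    """Return (zone_name, fail_count) pairs for rows with a nonzero
    FAIL column. Data rows look like:
    'NAME:  SIZE, LIMIT, USED, FREE, REQ, FAIL, SLEEP'."""
    failures = []
    for line in vmstat_z_output.splitlines():
        if ":" not in line:
            continue  # header or blank line
        name, _, rest = line.partition(":")
        fields = [f.strip() for f in rest.split(",")]
        if len(fields) < 7:
            continue  # truncated or malformed row
        fail = int(fields[5])
        if fail:
            failures.append((name.strip(), fail))
    return failures

sample = """ITEM   SIZE  LIMIT USED FREE  REQ FAIL SLEEP
64 Bucket:   512,  0, 13669, 995, 1113780, 268080, 0
VM OBJECT:   256,  0, 98960, 1570, 4440323, 0, 0"""
# failing_zones(sample) -> [('64 Bucket', 268080)]
```

Persistent bucket-zone failures like these usually just mean the per-CPU caches fell back to the keg under memory pressure, but they are the first thing to scan for in a dump this long.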

Re: r268621: panic: shadowed tmpfs v_object [with dump]

2014-07-24 Thread Sergey V. Dyatko
On Wed, 23 Jul 2014 22:56:46 +0200
Mattia Rossi mattia.rossi.m...@gmail.com wrote: 

 Got the same panic, is this fix getting committed? Or has it already 
 been committed?

r269053

 
 Mat
 
 On 23/07/14 18:12, Bryan Drewery wrote:
  On 7/23/14, 7:11 AM, Konstantin Belousov wrote:
  On Tue, Jul 22, 2014 at 02:53:56PM -0700, Bryan Drewery wrote:
  On 7/22/14, 2:26 PM, Bryan Drewery wrote:
  On 7/22/14, 2:07 PM, Bryan Drewery wrote:
  Meant to send to current@, moving there.
 
  On 7/22/14, 2:07 PM, Bryan Drewery wrote:
  On r268621:
 
  panic: shadowed tmpfs v_object 0xf807a7f96600
  cpuid = 0
  KDB: stack backtrace:
  db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame
  0xfe1247d67390
  kdb_backtrace() at kdb_backtrace+0x39/frame 0xfe1247d67440
  vpanic() at vpanic+0x126/frame 0xfe1247d67480
  kassert_panic() at kassert_panic+0x139/frame 0xfe1247d674f0
  vm_object_deallocate() at vm_object_deallocate+0x236/frame
  0xfe1247d67550
  tmpfs_free_node() at tmpfs_free_node+0x138/frame 0xfe1247d67580
  tmpfs_reclaim() at tmpfs_reclaim+0x17d/frame 0xfe1247d675c0
  VOP_RECLAIM_APV() at VOP_RECLAIM_APV+0xf7/frame 0xfe1247d675f0
  vgonel() at vgonel+0x1a1/frame 0xfe1247d67660
  vrecycle() at vrecycle+0x3e/frame 0xfe1247d67690
  tmpfs_inactive() at tmpfs_inactive+0x4c/frame 0xfe1247d676b0
  VOP_INACTIVE_APV() at VOP_INACTIVE_APV+0xf7/frame 
  0xfe1247d676e0
  vinactive() at vinactive+0xc6/frame 0xfe1247d67730
  vputx() at vputx+0x27a/frame 0xfe1247d67790
  tmpfs_rename() at tmpfs_rename+0xf5/frame 0xfe1247d67860
  VOP_RENAME_APV() at VOP_RENAME_APV+0xfc/frame 0xfe1247d67890
  kern_renameat() at kern_renameat+0x3ef/frame 0xfe1247d67ae0
  amd64_syscall() at amd64_syscall+0x25a/frame 0xfe1247d67bf0
  Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfe1247d67bf0
  --- syscall (128, FreeBSD ELF64, sys_rename), rip = 0x80088b74a, 
  rsp =
  0x7fffe238, rbp = 0x7fffe710 ---
  Uptime: 6d4h0m3s
 
  Dump failed. Partition too small.
 
  Unfortunately I have no dump to debug.
 
 
  Running poudriere again after boot hit the issue right away:
 
 
  (kgdb) bt
  #0  doadump (textdump=1) at pcpu.h:219
  #1  0x809122a7 in kern_reboot (howto=260) at
  /usr/src/sys/kern/kern_shutdown.c:445
  #2  0x809127e5 in vpanic (fmt=<value optimized out>,
  ap=<value optimized out>) at /usr/src/sys/kern/kern_shutdown.c:744
  #3  0x80912679 in kassert_panic (fmt=<value optimized out>) at
  /usr/src/sys/kern/kern_shutdown.c:632
  #4  0x80ba7996 in vm_object_deallocate (object=<value
  optimized out>) at /usr/src/sys/vm/vm_object.c:562
  #5  0x820a75a8 in tmpfs_free_node (tmp=0xf800b5155980,
  node=0xf802716ba740) at
  /usr/src/sys/modules/tmpfs/../../fs/tmpfs/tmpfs_subr.c:335
  #6  0x820a363d in tmpfs_reclaim (v=<value optimized out>) at
  /usr/src/sys/modules/tmpfs/../../fs/tmpfs/tmpfs_vnops.c:1276
  #7  0x80e48717 in VOP_RECLAIM_APV (vop=<value optimized out>,
  a=<value optimized out>) at vnode_if.c:2017
  #8  0x809c1381 in vgonel (vp=0xf802716b61d8) at
  vnode_if.h:830
  #9  0x809c18be in vrecycle (vp=0xf802716b61d8) at
  /usr/src/sys/kern/vfs_subr.c:2655
  #10 0x820a61cc in tmpfs_inactive (v=<value optimized out>) at
  /usr/src/sys/modules/tmpfs/../../fs/tmpfs/tmpfs_vnops.c:1242
  #11 0x80e485b7 in VOP_INACTIVE_APV (vop=<value optimized out>,
  a=<value optimized out>) at vnode_if.c:1951
  #12 0x809bfd36 in vinactive (vp=0xf802716b61d8,
  td=0xf80187e29920) at vnode_if.h:807
  #13 0x809c012a in vputx (vp=0xf802716b61d8, func=2) at
  /usr/src/sys/kern/vfs_subr.c:2267
  #14 0x820a47c5 in tmpfs_rename (v=<value optimized out>) at
  /usr/src/sys/modules/tmpfs/../../fs/tmpfs/tmpfs_vnops.c:1023
  #15 0x80e47d3c in VOP_RENAME_APV (vop=<value optimized out>,
  a=<value optimized out>) at vnode_if.c:1544
  #16 0x809cc77f in kern_renameat (td=<value optimized out>,
  oldfd=<value optimized out>, old=<value optimized out>,
  newfd=<value optimized out>, new=<value optimized out>,
   pathseg=<value optimized out>) at vnode_if.h:636
  #17 0x80d280fa in amd64_syscall (td=0xf80187e29920,
  traced=0) at subr_syscall.c:133
  #18 0x80d0a64b in Xfast_syscall () at
  /usr/src/sys/amd64/amd64/exception.S:407
  (kgdb) p *(vm_object_t)0xf8027169f500
  $1 = {lock = {lock_object = {lo_name = 0x80fe89f6 "vm object",
  lo_flags = 90374144, lo_data = 0, lo_witness = 0xfe6e7680},
  rw_lock = 18446735284191271200}, object_list = {
   tqe_next = 0xf8027169f400, tqe_prev = 0xf8027169f620},
  shadow_head = {lh_first = 0xf801b8489e00}, shadow_list = {le_next
  = 0x0, le_prev = 0x0}, memq = {tqh_first = 0xf811d966bc08,
   tqh_last = 0xf811d966bc18}, rtree = {rt_root =
  18446735354278362121, rt_flags = 0 '\0'}, size = 1, generation = 1,
  ref_count = 1, shadow_count = 1, memattr = 6 

Re: [ANNOUNCEMENT] pkg 1.3.0 out!

2014-07-24 Thread dt71

Baptiste Daroussin wrote, On 07/23/2014 16:42:

So much has happened that it is hard to summarize so I'll try to highlight the
major points:
- New solver, now pkg has a real SAT solver able to automatically handle
   conflicts and dynamically discover them. (yes pkg set -o is deprecated now)


Does pkg/Pkg/PKG/pkgng/PkgNg/PKGNG/whatever now downgrade/revert packages when 
removing an alternative repository, such as FreeBSD_new_xorg? (Previously, it 
didn't: I was required to manually remove and (re)install all X11-related 
packages.)
___
freebsd-current@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to freebsd-current-unsubscr...@freebsd.org