On 2014-07-24 18:56, Allan Jude wrote:
On 2014-07-24 16:11, Larry Rosenman wrote:
On 2014-07-24 15:07, Allan Jude wrote:
On 2014-07-24 15:57, Larry Rosenman wrote:
On 2014-07-24 14:53, Mark Martinec wrote:
2014-07-24 21:31, Larry Rosenman wrote:
borg.lerctr.org /home/ler # zxfer -dFkPvs -g 376 -O
r...@tbh.lerctr.org -R zroot  zroot/backups/TBH
Creating recursive snapshot zroot@zxfer_26699_20140724135840.
Checking grandfather status of all snapshots marked for deletion...
Grandfather check passed.
Sending zroot@zxfer_26699_20140724135840 to zroot/backups/TBH/zroot.
Sending zroot/ROOT@zxfer_26699_20140724135840 to
Sending zroot/ROOT/default@zxfer_23699_20140724134435 to
Sending zroot/ROOT/default@zxfer_26699_20140724135840 to
  (incremental to zroot/ROOT/default@zxfer_23699_20140724134435.)
Sending zroot/home@zxfer_26699_20140724135840 to

Write failed: Cannot allocate memory

cannot receive new filesystem stream: invalid backup stream
Error when zfs send/receiving.
borg.lerctr.org /home/ler #

well that's different.......

Sounds familiar, check my posting of today and links therein:


I'm not using netgraph to the best of my knowledge....
and the only fails on the SENDING host are:
ITEM                   SIZE  LIMIT     USED     FREE      REQ   FAIL SLEEP

8 Bucket:               64,     0,      41,    3555,  257774,    11,    0
12 Bucket:              96,     0,      96,    2569,  123653,     0,    0
16 Bucket:             128,     0,   17195,     506,  215573,     0,    0
32 Bucket:             256,     0,     340,    4670,  900638,    50,    0
64 Bucket:             512,     0,   10691,     365,  546888,185232,    0
128 Bucket:           1024,     0,    3563,     905,  348419,     0,    0
256 Bucket:           2048,     0,    2872,     162,  249995, 59834,    0
vmem btag:              56,     0,  192811,   51500,  502264,  1723,    0
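The zones worth looking at are the ones with a nonzero FAIL column (64 Bucket, 256 Bucket and vmem btag above). A small sketch for pulling those out of `vmstat -z` output; `filter_fails` is a name of my own choosing, and the heredoc is a canned excerpt so the pipeline can be exercised anywhere, while on the real host you would run `vmstat -z | filter_fails` instead:

```shell
# Print only the zones whose FAIL column (6th numeric field after
# the zone name) is nonzero.
filter_fails() {
  awk -F: '{
    n = split($2, f, ",")       # f: SIZE LIMIT USED FREE REQ FAIL SLEEP
    gsub(/ /, "", f[6])         # strip the column padding
    if (n >= 6 && f[6] + 0 > 0)
      print $1 ":" $2
  }'
}

filter_fails <<'EOF'
ITEM                   SIZE  LIMIT     USED     FREE      REQ   FAIL SLEEP
64 Bucket:              512,     0,   10691,     365,  546888,185232,    0
128 Bucket:            1024,     0,    3563,     905,  348419,     0,    0
EOF
```

Against that sample only the 64 Bucket line is printed, since its FAIL count (185232) is nonzero.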

I regularly use zxfer to transfer 500+ GiB datasets over the internet.
This week I actually replicated a 2.1 TiB dataset with zxfer without issue.

I wonder which thing is running out of memory. Is there a delay while it
is 'running out of memory', or does it fail immediately? Does running
top while it is running out of memory reveal anything?

I would expect to use up a lot of memory while doing deduplication, but
not otherwise.

Note: I most often use openssh-portable rather than the base ssh for
replication, as I enable the none cipher to reduce CPU usage, and adjust
TcpRcvBuf upwards to actually saturate a gigabit link over the internet.
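For reference, with the HPN-patched security/openssh-portable port that setup looks roughly like the fragment below; the option names are HPN-specific (stock OpenSSH rejects them), and the buffer size is illustrative, not taken from this thread:

```
# ~/.ssh/config fragment -- requires the HPN-SSH patches shipped
# with the security/openssh-portable port.
Host tbh.lerctr.org
    NoneEnabled yes       # permit the "none" cipher for bulk data
    NoneSwitch yes        # switch to it once authentication completes
    TcpRcvBuf 4096        # receive buffer in KB (per the HPN docs);
                          # raise it to keep a high-latency path full
```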

I wasn't watching exactly what it was doing, but the sending box has 16G
of RAM and 18G of swap, and swap has NOT been touched.

last pid: 74288;  load averages:  4.70,  5.61,  5.91    up 1+03:14:18
115 processes: 3 running, 112 sleeping
CPU:  0.6% user, 33.3% nice,  0.6% system,  0.1% interrupt, 65.4% idle
Mem: 847M Active, 761M Inact, 14G Wired, 4616K Cache, 357M Free
ARC: 12G Total, 6028M MFU, 5281M MRU, 3152K Anon, 120M Header, 688M Other
Swap: 18G Total, 18G Free

so I have zero idea where to go here.

Most ZFS memory usage is 'wired' and so cannot be swapped, so lack of
swap activity isn't a good indicator.
I would expect ZFS to give up ARC when it needed memory and couldn't get it....

I am also running Karl Denninger's ARC patch, which makes the ARC MUCH
more responsive about freeing memory when the system needs it.

Larry Rosenman                     http://www.lerctr.org/~ler
Phone: +1 214-642-9640 (c)     E-Mail: l...@lerctr.org
US Mail: 108 Turvey Cove, Hutto, TX 78634-5688
freebsd-current@freebsd.org mailing list
