Re: Strange ARC/Swap/CPU on yesterday's -CURRENT

2018-03-06 Thread Trond Endrestøl
On Tue, 6 Mar 2018 08:40-0800, Rodney W. Grimes wrote:

> > On Mon, 5 Mar 2018 14:39-0600, Larry Rosenman wrote:
> > 
> > > Upgraded to:
> > > 
> > > FreeBSD borg.lerctr.org 12.0-CURRENT FreeBSD 12.0-CURRENT #11 r330385: 
> > > Sun Mar  4 12:48:52 CST 2018 
> > > r...@borg.lerctr.org:/usr/obj/usr/src/amd64.amd64/sys/VT-LER  amd64
> > > +1200060 1200060
> > > 
> > > Yesterday, and I'm seeing really strange slowness, ARC use, and SWAP use 
> > > and swapping.
> > > 
> > > See http://www.lerctr.org/~ler/FreeBSD/Swapuse.png
> > 
> > I see these symptoms on stable/11. One of my servers has 32 GiB of 
> > RAM. After a reboot all is well. ARC starts to fill up, and I still 
> > have more than half of the memory available for user processes.
> > 
> > After running the periodic jobs at night, the amount of wired memory 
> > goes sky high. /etc/periodic/weekly/310.locate is a particularly nasty 
> > one.
> 
> I would like to find out if this is the same person I have
> reporting this problem from another source, or if this is
> a confirmation of a bug I was helping someone else with.
> 
> Have you been in contact with Michael Dexter about this
> issue, or any other forum/mailing list/etc?  

No, it wasn't me.

> If not then we have at least 2 reports of this unbound
> wired memory growth, if so hopefully someone here can
> take you further in the debug than we have been able
> to get.
> 
> > Limiting the ARC to, say, 16 GiB, has no effect on the high amount of 
> > wired memory. After a few more days, the kernel consumes virtually all 
> > memory, forcing processes in and out of the swap device.
> 
> Our experience as well.

-- 
Trond.
___
freebsd-current@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"


best settings for usb2 and attached disks, and sdcards

2018-03-06 Thread John
Hi,
[cc'd to arm@ and fs@ where it's also relevant]

I have a number of rpi2 & rpi3 machines. Usually I want to attach a USB 
keydrive to them so that the sdcard isn't thrashed. They're all running 
-current. /usr/src and /usr/ports at least are mounted on the keydrive.

When initially updating e.g. the ports tree, svn will time out or crash 
because of the poor write performance of these devices in an rpi2/3 context. 
The fs on the USB keys is always ufs2. I have tried mounting these devices 
with -o async, both manually and in fstab, but the parameter seems not to 
'take', in that mount doesn't report the async property as set:

[...]
/dev/da0p2 on /ext (ufs, local, noatime, soft-updates)
[...]

The filesystem was mounted with the command "mount -o async,noatime,rw 
/dev/da0s2 /ext", but I can't tell if async is on or just silently ignored; 
there is no error message. And I still have to run svnlite cleanup /ext/ports 
until svnlite stops bailing out. When I created the filesystem with newfs, I 
passed -t to enable TRIM, which seems to make a difference for deletes but 
not for writes. Can anyone please suggest anything I can do to speed up disk 
I/O? And is async being applied or ignored?
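For anyone wanting to check the same thing: mount(8) with no arguments lists the options the kernel actually applied, so comparing that against fstab shows whether async "took". A sketch, with the device and mountpoint taken from the message above (the fstab line is illustrative only):

```shell
# Illustrative /etc/fstab entry requesting async on the USB key (ufs2):
#
#   /dev/da0p2   /ext   ufs   rw,async,noatime   2   2
#
# After (re)mounting, list the options the kernel actually applied --
# "asynchronous" should appear alongside "noatime" if the flag took effect:
#
#   mount | grep ' /ext '
```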

thanks,
-- 
  John
  tech-li...@zyxst.net


Re: Strange ARC/Swap/CPU on yesterday's -CURRENT

2018-03-06 Thread Danilo G. Baio
On Tue, Mar 06, 2018 at 01:36:45PM -0600, Larry Rosenman wrote:
> On Tue, Mar 06, 2018 at 10:16:36AM -0800, Rodney W. Grimes wrote:
> > > On Tue, Mar 06, 2018 at 08:40:10AM -0800, Rodney W. Grimes wrote:
> > > > > On Mon, 5 Mar 2018 14:39-0600, Larry Rosenman wrote:
> > > > > 
> > > > > > Upgraded to:
> > > > > > 
> > > > > > FreeBSD borg.lerctr.org 12.0-CURRENT FreeBSD 12.0-CURRENT #11 
> > > > > > r330385: Sun Mar  4 12:48:52 CST 2018 
> > > > > > r...@borg.lerctr.org:/usr/obj/usr/src/amd64.amd64/sys/VT-LER  amd64
> > > > > > +1200060 1200060
> > > > > > 
> > > > > > Yesterday, and I'm seeing really strange slowness, ARC use, and 
> > > > > > SWAP use and swapping.
> > > > > > 
> > > > > > See http://www.lerctr.org/~ler/FreeBSD/Swapuse.png
> > > > > 
> > > > > I see these symptoms on stable/11. One of my servers has 32 GiB of 
> > > > > RAM. After a reboot all is well. ARC starts to fill up, and I still 
> > > > > have more than half of the memory available for user processes.
> > > > > 
> > > > > After running the periodic jobs at night, the amount of wired memory 
> > > > > goes sky high. /etc/periodic/weekly/310.locate is a particularly nasty 
> > > > > one.
> > > > 
> > > > I would like to find out if this is the same person I have
> > > > reporting this problem from another source, or if this is
> > > > a confirmation of a bug I was helping someone else with.
> > > > 
> > > > Have you been in contact with Michael Dexter about this
> > > > issue, or any other forum/mailing list/etc?  
> > > Just IRC/Slack, with no response.
> > > > 
> > > > If not then we have at least 2 reports of this unbound
> > > > wired memory growth, if so hopefully someone here can
> > > > take you further in the debug than we have been able
> > > > to get.
> > > What can I provide?  The system is still in this state as the full backup 
> > > is slow.
> > 
> > One place to look is to see if this is the recently fixed:
> > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=88
> > g_bio leak.
> > 
> > vmstat -z | egrep 'ITEM|g_bio|UMA'
> > 
> > would be a good first look
> > 
> borg.lerctr.org /home/ler $ vmstat -z | egrep 'ITEM|g_bio|UMA'
> ITEM   SIZE  LIMIT USED FREE  REQ FAIL SLEEP
> UMA Kegs:   280,  0, 346,   5, 560,   0,   0
> UMA Zones: 1928,  0, 363,   1, 577,   0,   0
> UMA Slabs:  112,  0,25384098,  977762,102033225,   0,   0
> UMA Hash:   256,  0,  59,  16, 105,   0,   0
> g_bio:  384,  0,  33,1627,542482056,   0,   0
> borg.lerctr.org /home/ler $
> > > > > Limiting the ARC to, say, 16 GiB, has no effect on the high amount of 
> > > > > wired memory. After a few more days, the kernel consumes virtually 
> > > > > all 
> > > > > memory, forcing processes in and out of the swap device.
> > > > 
> > > > Our experience as well.
> > > > 
> > > > ...
> > > > 
> > > > Thanks,
> > > > Rod Grimes 
> > > > rgri...@freebsd.org
> > > Larry Rosenman http://www.lerctr.org/~ler
> > 
> > -- 
> > Rod Grimes 
> > rgri...@freebsd.org
> 
> -- 
> Larry Rosenman http://www.lerctr.org/~ler
> Phone: +1 214-642-9640 E-Mail: l...@lerctr.org
> US Mail: 5708 Sabbia Drive, Round Rock, TX 78665-2106


Hi.

I noticed this behavior as well and changed vfs.zfs.arc_max to a smaller size.

For me it started when I upgraded to 1200058; on this box I'm only using
poudriere for building tests.

Regards.
-- 
Danilo G. Baio (dbaio)




Re: Strange ARC/Swap/CPU on yesterday's -CURRENT

2018-03-06 Thread Kurt Jaeger
Hi!

> > Yesterday, and I'm seeing really strange slowness, ARC use, and SWAP
> > use and swapping.

I've seen it on 12, @r328899. 

> > Ideas?
[...]
> Hard-slapping vfs.zfs.arc_max down a ways mitigated it enough to get
> me through the days, but it's a pretty gross hackaround...

That's what I did as well.

-- 
p...@opsec.eu   +49 171 3101372   2 years to go!


Re: Possible race in buildworld, @r330236 -> @r330274

2018-03-06 Thread Bryan Drewery
On 3/2/2018 5:02 AM, David Wolfskill wrote:
> On the two machines where I track head daily, I had failures during
> "make buildworld" ... in different places on the two machines; re-trying
> the build succeeded (in each case).
> 
> Lately, I have (also) been tracking head on these machines with
> /etc/src.conf augmented with:
> 
> # For Lua loader experiments
> WITHOUT_FORTH=yes
> WITH_LOADER_LUA=yes
> 
> 
> (on a separate slice, so each machine is booted to that environment and
> an upgrade-in-place build is done).  Curiously, no failures were
> observed in the case where the Lua loader is being used & built.  (I am
> explicitly NOT claiming any causal relationship here.)
> 
> I have copied the build typescripts for each combination of machine
> and boot loader, together with the associated /etc/make.conf,
> src.conf, and src-env.conf for each to my Web server.  The build
> typescripts also have compressed copies -- the Web server is on the
> "consumer" end of a residential ADSL: it works, but it's not fast.
> 
> For the above-cited material, please see
> ; here's
> what's there:
> 
> albert(11.1-S)[9] ls -lhRT
> total 1
> drwxr-xr-x  4 david  staff 4B Mar  2 04:36:15 2018 Forth
> drwxr-xr-x  4 david  staff 4B Mar  2 04:36:15 2018 Lua
> 
> ./Forth:
> total 33
> drwxr-xr-x  2 david  staff 7B Mar  2 04:49:55 2018 freebeast
> drwxr-xr-x  2 david  staff 7B Mar  2 04:49:57 2018 laptop
> 
> ./Forth/freebeast:
> total 1609
> -rw-r--r--  1 david  staff   5.4M Mar  2 04:11:28 2018 build.txt
> -rw-r--r--  1 david  staff   351K Mar  2 04:11:28 2018 build.txt.bz2
> -r--r--r--  1 david  staff   108B Feb  4 10:01:28 2014 make.conf
> -r--r--r--  1 david  staff19B May 24 10:28:11 2017 src-env.conf
> -r--r--r--  1 david  staff48B Mar 22 04:35:38 2016 src.conf
> 
> ./Forth/laptop:
> total 1865
> -rw-r--r--  1 david  staff   6.1M Mar  2 04:11:29 2018 build.txt
> -rw-r--r--  1 david  staff   403K Mar  2 04:11:29 2018 build.txt.bz2
> -r--r--r--  1 david  staff   503B Jul 30 03:55:27 2017 make.conf
> -r--r--r--  1 david  staff19B May 30 04:15:48 2016 src-env.conf
> -r--r--r--  1 david  staff   197B Jul  8 02:58:12 2017 src.conf
> 
> ./Lua:
> total 33
> drwxr-xr-x  2 david  staff 7B Mar  2 04:49:59 2018 freebeast
> drwxr-xr-x  2 david  staff 7B Mar  2 04:50:00 2018 laptop
> 
> ./Lua/freebeast:
> total 1529
> -rw-r--r--  1 david  staff   5.1M Mar  2 04:37:38 2018 build.txt
> -rw-r--r--  1 david  staff   337K Mar  2 04:37:38 2018 build.txt.bz2
> -r--r--r--  1 david  staff   108B Feb  4 10:01:28 2014 make.conf
> -r--r--r--  1 david  staff19B May 24 10:28:11 2017 src-env.conf
> -r--r--r--  1 david  staff   115B Feb 13 05:39:36 2018 src.conf
> 
> ./Lua/laptop:
> total 1545
> -rw-r--r--  1 david  staff   5.1M Mar  2 04:28:53 2018 build.txt
> -rw-r--r--  1 david  staff   341K Mar  2 04:28:53 2018 build.txt.bz2
> -r--r--r--  1 david  staff   503B Jul 30 03:55:27 2017 make.conf
> -r--r--r--  1 david  staff19B May 30 04:15:48 2016 src-env.conf
> -r--r--r--  1 david  staff   264B Feb 13 06:09:49 2018 src.conf
> albert(11.1-S)[10] 
> 
> 
> (Files were copied with -p; I thought having timestamps that showed when
> they actually were last modified might be of some use.)
> 
> Peace,
> david
> 

http://www.catwhisker.org/~david/FreeBSD/head/race@r330274/Forth/freebeast/build.txt

> /common/S4/obj/usr/src/amd64.amd64/tmp/usr/bin/ld: error: unable to find 
> library -lgcc_s
> --- kerberos5/lib/libroken__L ---
> Building /common/S4/obj/usr/src/amd64.amd64/kerberos5/lib/libroken/vis.pico
> --- secure/lib/libcrypto__L ---
> cc: error: linker command failed with exit code 1 (use -v to see invocation)
> *** [libgost.so] Error code 1
> 
> make[6]: stopped in /usr/src/secure/lib/libcrypto/engines/libgost


Looks like the existing -lgcc_s race I know about already. It comes down
to tools/install.sh not supporting -S properly.
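For context on what -S means here: install(1)'s safe-copy mode writes to a temporary file and then renames it into place, so concurrent readers (such as parallel link steps) never see a half-written file. A minimal sketch of the idea (`install_atomic` is a hypothetical helper for illustration, not the FreeBSD script):

```shell
#!/bin/sh
# Sketch of "safe copy" (install -S) semantics: copy to a temp file in the
# destination directory, then rename.  rename(2) is atomic within a
# filesystem, so the destination is never observed half-written.
install_atomic() {
    src=$1 dst=$2
    tmp="${dst}.inst.$$"
    cp "$src" "$tmp" && mv "$tmp" "$dst"
}
```

Without that, a parallel build step can open a library mid-copy, which matches the -lgcc_s failure mode quoted above.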

http://www.catwhisker.org/~david/FreeBSD/head/race@r330274/Forth/laptop/build.txt

No clue about this one though...

> Building 
> /common/S4/obj/usr/src/amd64.amd64/obj-lib32/kerberos5/lib/libgssapi_krb5/arcfour.po
> --- lib__L ---
> sh /usr/src/tools/install.sh -l s  ../numeric 
> /common/S4/obj/usr/src/amd64.amd64/obj-lib32/tmp/usr/include/c++/v1/tr1/numeric
> --- cddl/lib__L ---
> /common/S4/obj/usr/src/amd64.amd64/tmp/usr/bin/ld: error: can't create 
> dynamic relocation R_386_32 against symbol: __stack_chk_guard in readonly 
> segment; recompile object files with -fPIC
> defined in
> /common/S4/obj/usr/src/amd64.amd64/obj-lib32/tmp/usr/lib32/libc.so.7
> referenced by skein.c
>   skein.o:(Skein_256_Init) in archive
> /common/S4/obj/usr/src/amd64.amd64/obj-lib32/tmp/usr/lib32/libmd.a
> 
> /common/S4/obj/usr/src/amd64.amd64/tmp/usr/bin/ld: error: can't create 
> dynamic relocation R_386_32 against local symbol in readonly segment; 
> recompile object files with -fPIC
> defined in
> /common/S

Re: Strange ARC/Swap/CPU on yesterday's -CURRENT

2018-03-06 Thread Larry Rosenman
On Tue, Mar 06, 2018 at 10:16:36AM -0800, Rodney W. Grimes wrote:
> > On Tue, Mar 06, 2018 at 08:40:10AM -0800, Rodney W. Grimes wrote:
> > > > On Mon, 5 Mar 2018 14:39-0600, Larry Rosenman wrote:
> > > > 
> > > > > Upgraded to:
> > > > > 
> > > > > FreeBSD borg.lerctr.org 12.0-CURRENT FreeBSD 12.0-CURRENT #11 
> > > > > r330385: Sun Mar  4 12:48:52 CST 2018 
> > > > > r...@borg.lerctr.org:/usr/obj/usr/src/amd64.amd64/sys/VT-LER  amd64
> > > > > +1200060 1200060
> > > > > 
> > > > > Yesterday, and I'm seeing really strange slowness, ARC use, and SWAP 
> > > > > use and swapping.
> > > > > 
> > > > > See http://www.lerctr.org/~ler/FreeBSD/Swapuse.png
> > > > 
> > > > I see these symptoms on stable/11. One of my servers has 32 GiB of 
> > > > RAM. After a reboot all is well. ARC starts to fill up, and I still 
> > > > have more than half of the memory available for user processes.
> > > > 
> > > > After running the periodic jobs at night, the amount of wired memory 
> > > > goes sky high. /etc/periodic/weekly/310.locate is a particularly nasty 
> > > > one.
> > > 
> > > I would like to find out if this is the same person I have
> > > reporting this problem from another source, or if this is
> > > a confirmation of a bug I was helping someone else with.
> > > 
> > > Have you been in contact with Michael Dexter about this
> > > issue, or any other forum/mailing list/etc?  
> > Just IRC/Slack, with no response.
> > > 
> > > If not then we have at least 2 reports of this unbound
> > > wired memory growth, if so hopefully someone here can
> > > take you further in the debug than we have been able
> > > to get.
> > What can I provide?  The system is still in this state as the full backup 
> > is slow.
> 
> One place to look is to see if this is the recently fixed:
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=88
> g_bio leak.
> 
> vmstat -z | egrep 'ITEM|g_bio|UMA'
> 
> would be a good first look
> 
borg.lerctr.org /home/ler $ vmstat -z | egrep 'ITEM|g_bio|UMA'
ITEM   SIZE  LIMIT USED FREE  REQ FAIL SLEEP
UMA Kegs:   280,  0, 346,   5, 560,   0,   0
UMA Zones: 1928,  0, 363,   1, 577,   0,   0
UMA Slabs:  112,  0,25384098,  977762,102033225,   0,   0
UMA Hash:   256,  0,  59,  16, 105,   0,   0
g_bio:  384,  0,  33,1627,542482056,   0,   0
borg.lerctr.org /home/ler $
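For watching the suspected leak over time, the USED column for g_bio is the number to track; on a healthy system it should stay small and stable. A small sketch for pulling it out of `vmstat -z` output (the column layout is assumed from the listing above; `parse_gbio` is a hypothetical helper):

```shell
#!/bin/sh
# parse_gbio: print the USED count for the g_bio UMA zone from `vmstat -z`
# output on stdin.  Fields after the zone name are comma-separated, so
# field 4 (after splitting on ':' and ',') is USED.
parse_gbio() {
    awk -F'[:,]' '/^g_bio/ { gsub(/ /, "", $4); print $4 }'
}

# Usage on a live FreeBSD system (commented out here):
# vmstat -z | parse_gbio
```

Run every few minutes, a monotonically growing USED count would point at the g_bio leak; the 33 shown above looks healthy.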
> > > > Limiting the ARC to, say, 16 GiB, has no effect on the high amount of 
> > > > wired memory. After a few more days, the kernel consumes virtually all 
> > > > memory, forcing processes in and out of the swap device.
> > > 
> > > Our experience as well.
> > > 
> > > ...
> > > 
> > > Thanks,
> > > Rod Grimes 
> > > rgri...@freebsd.org
> > Larry Rosenman http://www.lerctr.org/~ler
> 
> -- 
> Rod Grimes rgri...@freebsd.org

-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 E-Mail: l...@lerctr.org
US Mail: 5708 Sabbia Drive, Round Rock, TX 78665-2106




Re: Strange ARC/Swap/CPU on yesterday's -CURRENT

2018-03-06 Thread Matthew D. Fuller
On Mon, Mar 05, 2018 at 02:39:18PM -0600 I heard the voice of
Larry Rosenman, and lo! it spake thus:
> 
> Yesterday, and I'm seeing really strange slowness, ARC use, and SWAP
> use and swapping.
> 
> Ideas?

Since I updated to the Feb 25 -CURRENT I'm currently running (from a
mid-September build, I believe), I've seen similar behavior.  It seems like
the ARC has gotten really unwilling to yield: it grows in size and then
doesn't let up under pressure.  I saw actively used programs constantly
swapping their working sets in and out, since they were left with only tiny
amounts of available memory.

Hard-slapping vfs.zfs.arc_max down a ways mitigated it enough to get
me through the days, but it's a pretty gross hackaround...


-- 
Matthew Fuller (MF4839)   |  fulle...@over-yonder.net
Systems/Network Administrator |  http://www.over-yonder.net/~fullermd/
   On the Internet, nobody can hear you scream.


Re: Strange ARC/Swap/CPU on yesterday's -CURRENT

2018-03-06 Thread Rodney W. Grimes
> On Tue, Mar 06, 2018 at 08:40:10AM -0800, Rodney W. Grimes wrote:
> > > On Mon, 5 Mar 2018 14:39-0600, Larry Rosenman wrote:
> > > 
> > > > Upgraded to:
> > > > 
> > > > FreeBSD borg.lerctr.org 12.0-CURRENT FreeBSD 12.0-CURRENT #11 r330385: 
> > > > Sun Mar  4 12:48:52 CST 2018 
> > > > r...@borg.lerctr.org:/usr/obj/usr/src/amd64.amd64/sys/VT-LER  amd64
> > > > +1200060 1200060
> > > > 
> > > > Yesterday, and I'm seeing really strange slowness, ARC use, and SWAP 
> > > > use and swapping.
> > > > 
> > > > See http://www.lerctr.org/~ler/FreeBSD/Swapuse.png
> > > 
> > > I see these symptoms on stable/11. One of my servers has 32 GiB of 
> > > RAM. After a reboot all is well. ARC starts to fill up, and I still 
> > > have more than half of the memory available for user processes.
> > > 
> > > After running the periodic jobs at night, the amount of wired memory 
> > > goes sky high. /etc/periodic/weekly/310.locate is a particularly nasty 
> > > one.
> > 
> > I would like to find out if this is the same person I have
> > reporting this problem from another source, or if this is
> > a confirmation of a bug I was helping someone else with.
> > 
> > Have you been in contact with Michael Dexter about this
> > issue, or any other forum/mailing list/etc?  
> Just IRC/Slack, with no response.
> > 
> > If not then we have at least 2 reports of this unbound
> > wired memory growth, if so hopefully someone here can
> > take you further in the debug than we have been able
> > to get.
> What can I provide?  The system is still in this state as the full backup is 
> slow.

One place to look is to see if this is the recently fixed:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=88
g_bio leak.

vmstat -z | egrep 'ITEM|g_bio|UMA'

would be a good first look

> > > Limiting the ARC to, say, 16 GiB, has no effect on the high amount of 
> > > wired memory. After a few more days, the kernel consumes virtually all 
> > > memory, forcing processes in and out of the swap device.
> > 
> > Our experience as well.
> > 
> > ...
> > 
> > Thanks,
> > Rod Grimes 
> > rgri...@freebsd.org
> Larry Rosenman http://www.lerctr.org/~ler

-- 
Rod Grimes rgri...@freebsd.org


Re: Strange ARC/Swap/CPU on yesterday's -CURRENT

2018-03-06 Thread Larry Rosenman
On Tue, Mar 06, 2018 at 08:40:10AM -0800, Rodney W. Grimes wrote:
> > On Mon, 5 Mar 2018 14:39-0600, Larry Rosenman wrote:
> > 
> > > Upgraded to:
> > > 
> > > FreeBSD borg.lerctr.org 12.0-CURRENT FreeBSD 12.0-CURRENT #11 r330385: 
> > > Sun Mar  4 12:48:52 CST 2018 
> > > r...@borg.lerctr.org:/usr/obj/usr/src/amd64.amd64/sys/VT-LER  amd64
> > > +1200060 1200060
> > > 
> > > Yesterday, and I'm seeing really strange slowness, ARC use, and SWAP use 
> > > and swapping.
> > > 
> > > See http://www.lerctr.org/~ler/FreeBSD/Swapuse.png
> > 
> > I see these symptoms on stable/11. One of my servers has 32 GiB of 
> > RAM. After a reboot all is well. ARC starts to fill up, and I still 
> > have more than half of the memory available for user processes.
> > 
> > After running the periodic jobs at night, the amount of wired memory 
> > goes sky high. /etc/periodic/weekly/310.locate is a particularly nasty 
> > one.
> 
> I would like to find out if this is the same person I have
> reporting this problem from another source, or if this is
> a confirmation of a bug I was helping someone else with.
> 
> Have you been in contact with Michael Dexter about this
> issue, or any other forum/mailing list/etc?  
Just IRC/Slack, with no response.
> 
> If not then we have at least 2 reports of this unbound
> wired memory growth, if so hopefully someone here can
> take you further in the debug than we have been able
> to get.
What can I provide?  The system is still in this state as the full backup is 
slow.


> 
> > 
> > Limiting the ARC to, say, 16 GiB, has no effect on the high amount of 
> > wired memory. After a few more days, the kernel consumes virtually all 
> > memory, forcing processes in and out of the swap device.
> 
> Our experience as well.
> 
> ...
> 
> Thanks,
> -- 
> Rod Grimes rgri...@freebsd.org
-- 
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 E-Mail: l...@lerctr.org
US Mail: 5708 Sabbia Drive, Round Rock, TX 78665-2106




Re: Strange ARC/Swap/CPU on yesterday's -CURRENT

2018-03-06 Thread Rodney W. Grimes
> On Mon, 5 Mar 2018 14:39-0600, Larry Rosenman wrote:
> 
> > Upgraded to:
> > 
> > FreeBSD borg.lerctr.org 12.0-CURRENT FreeBSD 12.0-CURRENT #11 r330385: Sun 
> > Mar  4 12:48:52 CST 2018 
> > r...@borg.lerctr.org:/usr/obj/usr/src/amd64.amd64/sys/VT-LER  amd64
> > +1200060 1200060
> > 
> > Yesterday, and I'm seeing really strange slowness, ARC use, and SWAP use 
> > and swapping.
> > 
> > See http://www.lerctr.org/~ler/FreeBSD/Swapuse.png
> 
> I see these symptoms on stable/11. One of my servers has 32 GiB of 
> RAM. After a reboot all is well. ARC starts to fill up, and I still 
> have more than half of the memory available for user processes.
> 
> After running the periodic jobs at night, the amount of wired memory 
> goes sky high. /etc/periodic/weekly/310.locate is a particularly nasty 
> one.

I would like to find out if this is the same person I have
reporting this problem from another source, or if this is
a confirmation of a bug I was helping someone else with.

Have you been in contact with Michael Dexter about this
issue, or any other forum/mailing list/etc?  

If not then we have at least 2 reports of this unbound
wired memory growth, if so hopefully someone here can
take you further in the debug than we have been able
to get.

> 
> Limiting the ARC to, say, 16 GiB, has no effect on the high amount of 
> wired memory. After a few more days, the kernel consumes virtually all 
> memory, forcing processes in and out of the swap device.

Our experience as well.

...

Thanks,
-- 
Rod Grimes rgri...@freebsd.org


DTrace suddenly running out of scratch space.

2018-03-06 Thread raichoo
Hi,

I'm encountering an issue with recent builds of FreeBSD CURRENT that wasn't
present at the end of last year.

I gave a presentation at 34c3 where I demoed using DTrace to identify code
that is susceptible to timing side-channel attacks. The script is rather
simple and worked fine back then.

 #pragma D option dynvarsize=512m

int len;

BEGIN
{
  len = 0;
}

pid$$target:authenticate:check:entry
{
  self->enter = vtimestamp;
  self->arg = copyinstr(arg0);
}

pid$$target:authenticate:check:return
/self->enter/
{
  @timing[self->arg] = lquantize(vtimestamp - self->enter, 700, 800, 10);
  if (strlen(self->arg) != len) {
len = strlen(self->arg);
trunc(@timing);
  }
  self->enter = 0;
}

pid$$target:authenticate:check:return
/arg1 == 1/
{
  printf("Password is: %s\n", self->arg);
  exit(0);
}

pid$$target:authenticate:check:return
{
  self->arg = 0;
}

tick-3s
{
  printa(@timing);
}

It basically measures the time it takes to compare two strings, nothing
fancy. For some reason dtrace now reports the following when I run this
script:

dtrace: error on enabled probe ID 2 (ID 76791:
pid3282:authenticate:check:entry): out of scratch space in action #2 at DIF
offset 12
dtrace: error on enabled probe ID 7 (ID 76792:
pid3282:authenticate:check:return): invalid address (0x0) in action #1 at
DIF offset 24

I'm not quite sure where this is coming from. Maybe the script was wrong in
the first place and recent changes are reacting to that, but to me it seems
as if the aggregations are not getting cleaned up properly.
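One thing that may be worth trying (an assumption on my part, not a confirmed fix): copyinstr() draws its buffer from the probe's per-CPU scratch memory, and the size of that buffer is governed by the strsize option (default 256 bytes). Since the strings here are short passwords, shrinking the buffer might keep action #2 within the available scratch:

```d
#pragma D option strsize=64
```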

Kind regards,
raichoo


Re: Strange ARC/Swap/CPU on yesterday's -CURRENT

2018-03-06 Thread Stefan Esser
Am 05.03.18 um 21:39 schrieb Larry Rosenman:
> Upgraded to:
> 
> FreeBSD borg.lerctr.org 12.0-CURRENT FreeBSD 12.0-CURRENT #11 r330385: Sun 
> Mar  4 12:48:52 CST 2018 
> r...@borg.lerctr.org:/usr/obj/usr/src/amd64.amd64/sys/VT-LER  amd64
> +1200060 1200060
> 
> Yesterday, and I'm seeing really strange slowness, ARC use, and SWAP use and 
> swapping.
> 
> See http://www.lerctr.org/~ler/FreeBSD/Swapuse.png
> 
> Ideas?

I'm seeing the same, and currently work around this with a reasonably limited
vfs.zfs.arc_max.

Without such a limit I see (on a system with 24 GB RAM):

CPU:  0.3% user,  0.0% nice,  0.9% system,  0.1% interrupt, 98.8% idle
Mem: 14M Active, 1228K Inact, 32K Laundry, 23G Wired, 376M Free
ARC: 19G Total, 3935M MFU, 14G MRU, 82M Anon, 223M Header, 876M Other
 18G Compressed, 36G Uncompressed, 2.02:1 Ratio
Swap: 24G Total, 888M Used, 23G Free, 3% Inuse, 8892K In, 5136K Out

sysctl vfs.zfs.arc_max=15988656640 results in:

Mem: 129M Active, 72M Inact, 36K Laundry, 18G Wired, 5149M Free
ARC: 15G Total, 3997M MFU, 10G MRU, 40M Anon, 205M Header, 877M Other
 13G Compressed, 28G Uncompressed, 2.08:1 Ratio
Swap: 24G Total, 796M Used, 23G Free, 3% Inuse, 16K In

The system was mostly idle at both times, just some Samba traffic and
mail being checked by spamassassin. And I noticed it (this time) when
the spamassassin processes were aborted due to a time limit.

I think this problem must have been introduced in the last few weeks,
but I cannot give a better estimate (I do not reboot that often).

But I had already applied the arc_max setting a week ago (and had not
put it in sysctl.conf, in the hope that the ARC growth was a temporary
problem in the ZFS code, soon to be fixed ...).
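For anyone wanting to make the workaround stick across reboots, the same limit can be set as a loader tunable (the value is whatever cap fits your machine; 16 GiB here is only an example):

```conf
# /boot/loader.conf -- cap the ZFS ARC at boot (value in bytes; 16 GiB shown)
vfs.zfs.arc_max="17179869184"
```

It can also be changed on a running system with sysctl, as shown above.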

Regards, STefan


Re: Strange ARC/Swap/CPU on yesterday's -CURRENT

2018-03-06 Thread Trond Endrestøl
On Mon, 5 Mar 2018 14:39-0600, Larry Rosenman wrote:

> Upgraded to:
> 
> FreeBSD borg.lerctr.org 12.0-CURRENT FreeBSD 12.0-CURRENT #11 r330385: Sun 
> Mar  4 12:48:52 CST 2018 
> r...@borg.lerctr.org:/usr/obj/usr/src/amd64.amd64/sys/VT-LER  amd64
> +1200060 1200060
> 
> Yesterday, and I'm seeing really strange slowness, ARC use, and SWAP use and 
> swapping.
> 
> See http://www.lerctr.org/~ler/FreeBSD/Swapuse.png

I see these symptoms on stable/11. One of my servers has 32 GiB of 
RAM. After a reboot all is well. ARC starts to fill up, and I still 
have more than half of the memory available for user processes.

After running the periodic jobs at night, the amount of wired memory 
goes sky high. /etc/periodic/weekly/310.locate is a particularly nasty 
one.

Limiting the ARC to, say, 16 GiB, has no effect on the high amount of 
wired memory. After a few more days, the kernel consumes virtually all 
memory, forcing processes in and out of the swap device.

stable/10 never exhibited these symptoms, even with ZFS.

I had hoped the kernel would manage its memory usage more wisely, but 
maybe it's time to set some hard limits on the kernel.

Last year, I experienced deadlocks on stable/11 systems running ZFS 
with only 1 GiB of RAM. periodic(8) and clang jobs would never be 
rescheduled, they just sat there doing nothing halfway through their 
mission and with most of their pages on the swap device. I was lucky 
enough to be able to log in and reboot the damned servers. I installed 
8 GiB of memory in each server and have never seen any deadlocks since.

Maybe we should try to help by running (virtual) machines with low 
amounts of memory and high loads to weed out these bugs, if they still 
persist.

-- 
Trond.