Re: Clang-6 and GNUisms.

2018-03-11 Thread Mark Linimon
The problem is even worse on armv6/armv7/aarch64, and much worse on
powerpc64/sparc64, which still have gcc in base.

I have not been saving up the emails where ports committers have been
fixing various failure modes.  I hesitate to start making harmless-seeming
patches myself for fear of my non-existent C++ skills.

I would be glad to help someone write up some documentation.  This
affects hundreds of port builds right now -- I have not even caught
up yet on tracking them all.

mcl


Re: Strange ARC/Swap/CPU on yesterday's -CURRENT

2018-03-11 Thread Jeff Roberson

On Sun, 11 Mar 2018, O. Hartmann wrote:


On Wed, 7 Mar 2018 14:39:13 +0400, Roman Bogorodskiy wrote:


  Danilo G. Baio wrote:


On Tue, Mar 06, 2018 at 01:36:45PM -0600, Larry Rosenman wrote:

On Tue, Mar 06, 2018 at 10:16:36AM -0800, Rodney W. Grimes wrote:

On Tue, Mar 06, 2018 at 08:40:10AM -0800, Rodney W. Grimes wrote:

On Mon, 5 Mar 2018 14:39 -0600, Larry Rosenman wrote:


Upgraded to:

FreeBSD borg.lerctr.org 12.0-CURRENT FreeBSD 12.0-CURRENT #11 r330385:
Sun Mar  4 12:48:52 CST 2018
r...@borg.lerctr.org:/usr/obj/usr/src/amd64.amd64/sys/VT-LER  amd64
+1200060 1200060

Yesterday, and I'm seeing really strange slowness, ARC use, and SWAP use
and swapping.

See http://www.lerctr.org/~ler/FreeBSD/Swapuse.png


I see these symptoms on stable/11. One of my servers has 32 GiB of
RAM. After a reboot all is well. ARC starts to fill up, and I still
have more than half of the memory available for user processes.

After running the periodic jobs at night, the amount of wired memory
goes sky high.  /etc/periodic/weekly/310.locate is a particularly nasty
one.


I would like to find out whether you are the same person who has been
reporting this problem to me through another channel, or whether this is
a separate confirmation of a bug I was helping someone else with.

Have you been in contact with Michael Dexter about this
issue, or any other forum/mailing list/etc?

Just IRC/Slack, with no response.


If not, then we have at least two reports of this unbounded wired-memory
growth; if so, hopefully someone here can take you further in the
debugging than we have been able to get.

What can I provide?  The system is still in this state as the full backup is
slow.


One place to look is to see if this is the recently fixed g_bio leak:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=88

vmstat -z | egrep 'ITEM|g_bio|UMA'

would be a good first look


borg.lerctr.org /home/ler $ vmstat -z | egrep 'ITEM|g_bio|UMA'
ITEM          SIZE  LIMIT      USED     FREE        REQ  FAIL  SLEEP
UMA Kegs:      280,     0,      346,       5,       560,    0,     0
UMA Zones:    1928,     0,      363,       1,       577,    0,     0
UMA Slabs:     112,     0, 25384098,  977762, 102033225,    0,     0
UMA Hash:      256,     0,       59,      16,       105,    0,     0
g_bio:         384,     0,       33,    1627, 542482056,    0,     0
borg.lerctr.org /home/ler $

Limiting the ARC to, say, 16 GiB, has no effect on the high amount of
wired memory. After a few more days, the kernel consumes virtually all
memory, forcing processes in and out of the swap device.


Our experience as well.

...

Thanks,
Rod Grimes
rgri...@freebsd.org


--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 E-Mail: l...@lerctr.org
US Mail: 5708 Sabbia Drive, Round Rock, TX 78665-2106



Hi.

I noticed this behavior as well and changed vfs.zfs.arc_max to a smaller size.

For me it started when I upgraded to 1200058; on this box I'm only using
poudriere for test builds.


I've noticed that as well.

I have 16G of RAM and two disks: the first one is UFS with the system
installation, and the second one is ZFS, which I use to store media and
data files and for poudriere.

I don't recall the exact date, but it started fairly recently. The system
would swap like crazy to the point where I could not even ssh to it, and
could hardly log in through the tty: it might take 10-15 minutes to see a
command typed in the shell.

I've updated loader.conf to have the following:

vfs.zfs.arc_max="4G"
vfs.zfs.prefetch_disable="1"
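
A quick way to confirm the cap took effect after a reboot, and to watch the
ARC against total wired memory (sysctl names as seen on recent FreeBSD;
adjust if they differ on your branch):

sysctl vfs.zfs.arc_max                        # the configured cap, in bytes
sysctl kstat.zfs.misc.arcstats.size           # current ARC size, in bytes
sysctl vm.stats.vm.v_wire_count hw.pagesize   # wired pages * page size = wired bytes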

It fixed the problem, but introduced a new one. When I'm building stuff
with poudriere with ccache enabled, it takes hours to build even small
projects like curl or gnutls.

For example, current build:

[10i386-default] [2018-03-07_07h44m45s] [parallel_build:] Queued: 3  Built: 1  Failed: 0  Skipped: 0  Ignored: 0  Tobuild: 2   Time: 06:48:35
[02]: security/gnutls | gnutls-3.5.18   build   (06:47:51)

Almost 7 hours already and still going!

gstat output looks like this:

dT: 1.002s  w: 1.000s
 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0      0      0      0    0.0      0      0    0.0     0.0  da0
    0      1      0      0    0.0      1    128    0.7     0.1  ada0
    1    106    106    439   64.6      0      0    0.0    98.8  ada1
    0      1      0      0    0.0      1    128    0.7     0.1  ada0s1
    0      0      0      0    0.0      0      0    0.0     0.0  ada0s1a
    0      0      0      0    0.0      0      0    0.0     0.0  ada0s1b
    0      1      0      0    0.0      1    128    0.7     0.1  ada0s1d

ada0 here is the UFS drive, and ada1 is ZFS.


Regards.
--
Danilo G. Baio (dbaio)




Roman Bogorodskiy



This is from an APU, no ZFS, UFS on a small mSATA device; the PC Engines APU
works as a firewall, router, ...

Clang-6 and GNUisms.

2018-03-11 Thread Ian FREISLICH
Hi

There's been some fallout in ports land since clang-6 around null
pointer arithmetic and casts.  I cannot think of a good reason for doing
the following, but then I've not dabbled in the arcane much:

# define __INT_TO_PTR(P) ((P) + (char *) 0)

So far I've encountered these in lang/v8 and devel/avr-gcc.  I know it
just generates warnings, but GNUisms and -Werror abound.  Adding
-Wno-null-pointer-arithmetic and -Wno-vexing-parse to CFLAGS/CXXFLAGS
provides some relief but V8 still fails:

/usr/ports/lang/v8/work/v8-3.18.5/out/native/obj.target/v8_base.x64/src/type-info.o
../src/stub-cache.cc:1477:33: error: reinterpret_cast from 'nullptr_t' to 'char *' is not allowed
  : GetCodeWithFlags(flags, reinterpret_cast<char *>(NULL));
                            ^

I haven't got avr-gcc to compile yet.
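
For anyone who wants to poke at this without building v8, here is a small
standalone reproducer; the file name and the alternative macro spelling are
mine, not taken from the v8 or avr-gcc sources:

cat > int_to_ptr.c <<'EOF'
#include <stdint.h>
/* The GNU-style idiom clang 6 warns about: arithmetic on a (char *) 0. */
#define INT_TO_PTR_OLD(P)  ((P) + (char *) 0)
/* One standards-friendly spelling: convert through uintptr_t instead. */
#define INT_TO_PTR_NEW(P)  ((char *) (uintptr_t) (P))
char *old_way(long p) { return INT_TO_PTR_OLD(p); }
char *new_way(long p) { return INT_TO_PTR_NEW(p); }
EOF
cc -c -Wnull-pointer-arithmetic int_to_ptr.c      # warns on old_way()
cc -c -Wno-null-pointer-arithmetic int_to_ptr.c   # warning suppressed

Whether the uintptr_t form is acceptable to the upstreams is a separate
question; it merely avoids the GNU extension.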

Ian

-- 
Ian Freislich




Re: Strange ARC/Swap/CPU on yesterday's -CURRENT

2018-03-11 Thread Mark Millard
As I understand, O. Hartmann's report ( ohartmann at walstatt.org ) in:

https://lists.freebsd.org/pipermail/freebsd-current/2018-March/068806.html

includes a system with a completely non-ZFS context: UFS only. Quoting that 
part:

> This is from an APU, no ZFS, UFS on a small mSATA device (the PC Engines
> APU works as a firewall, router, PBX):
> 
> last pid:  9665;  load averages:  0.13,  0.13,  0.11   up 3+06:53:55  00:26:26
> 19 processes:  1 running, 18 sleeping
> CPU:  0.3% user,  0.0% nice,  0.2% system,  0.0% interrupt, 99.5% idle
> Mem: 27M Active, 6200K Inact, 83M Laundry, 185M Wired, 128K Buf, 675M Free
> Swap: 7808M Total, 2856K Used, 7805M Free
> [...]
> 
> The APU is running CURRENT (FreeBSD 12.0-CURRENT #42 r330608: Wed Mar  7
> 16:55:59 CET 2018 amd64).  Usually, the APU never(!) uses swap; now it has
> been swapping like hell for a couple of days and I have to reboot it fairly
> often.

Unless this is unrelated, it would suggest that ZFS and its ARC need not
be involved.

Would what you are investigating relative to your "NUMA and concurrency
related work" fit with such a non-ZFS (no-ARC) context?

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)



Re: Strange ARC/Swap/CPU on yesterday's -CURRENT

2018-03-11 Thread Matthew D. Fuller
On Sun, Mar 11, 2018 at 10:43:58AM -1000 I heard the voice of
Jeff Roberson, and lo! it spake thus:
> 
> First, I would like to identify whether the wired memory is in the
> buffer cache.  Can those of you that have a repro look at sysctl
> vfs.bufspace and tell me if that accounts for the bulk of your wired
> memory usage?  I'm wondering if a job ran that pulled in all of the
> bufs from your root disk and filled up the buffer cache which
> doesn't have a back-pressure mechanism.

If by "root disk", you mean the one that isn't ZFS, that wouldn't
touch anything here; apart from a md-backed UFS /tmp and some NFS
mounts, everything on my system is ZFS.

I believe vfs.bufspace is what shows up as "Buf" on top?  I don't
recall it looking particularly interesting when things were madly
swapping.  I'll uncork arc_max again for a bit and see if anything odd
shows up in it, but it's only a dozen megs or so now.
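
For anyone else comparing the two numbers: vfs.bufspace is already in bytes,
while the wired count is in pages, so something like this puts them side by
side (simple arithmetic, nothing official):

sysctl -n vfs.bufspace
echo "$(sysctl -n vm.stats.vm.v_wire_count) * $(sysctl -n hw.pagesize)" | bc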



-- 
Matthew Fuller (MF4839)   |  fulle...@over-yonder.net
Systems/Network Administrator |  http://www.over-yonder.net/~fullermd/
   On the Internet, nobody can hear you scream.


Re: Strange ARC/Swap/CPU on yesterday's -CURRENT

2018-03-11 Thread Tom Rushworth
Hi All,

On 11/03/2018 13:43, Jeff Roberson wrote:
[snip]
> 
> Hi Folks,
> 
> This could be my fault from recent NUMA and concurrency related work.  I
> did touch some of the arc back-pressure mechanisms.  First, I would like
> to identify whether the wired memory is in the buffer cache.  Can those
> of you that have a repro look at sysctl vfs.bufspace and tell me if that
> accounts for the bulk of your wired memory usage?  I'm wondering if a
> job ran that pulled in all of the bufs from your root disk and filled up
> the buffer cache which doesn't have a back-pressure mechanism.  Then arc
> didn't respond appropriately to lower its usage.
> 
> Also, if you could try going back to r328953 or r326346 and let me know
> if the problem exists in either.  That would be very helpful.  If anyone
> is willing to debug this with me contact me directly and I will send
> some test patches or debugging info after you have done the above steps.
> 
> Thank you for the reports.
> 
> Jeff
[snip]

I'm seeing this on 11.1-STABLE r330126 with 32G of memory.  I have two
physical storage devices (one SSD, one HD), each with a separate ZFS pool,
and I can reproduce this fairly easily and quickly with:

   cp -r  

The directory being copied holds about 25G (from du -sg); I end up with
16G wired after starting with less than 1G.  After the copy:
   sysctl vfs.bufspace  --> 0

Out of curiosity I copied it back the other way and drove the wired
memory to 26G during the copy, falling back to 24G once the copy
finished, with vfs.bufspace at 0.

I'm not really in a good position to roll back to r328953 (or anything
much earlier); my graphics hardware (i915) needs something pretty recent.

I am running a custom kernel (I dropped a lot of the network
interfaces), so if you need more info I'm willing to help, as long as
you explain what you need in short words :).  (I'm not very familiar
with FreeBSD kernel work or sysadmin.)

Regards,

-- 
Tom Rushworth


Re: Strange ARC/Swap/CPU on yesterday's -CURRENT

2018-03-11 Thread Jeff Roberson

On Sun, 11 Mar 2018, Mark Millard wrote:


As I understand, O. Hartmann's report ( ohartmann at walstatt.org ) in:

https://lists.freebsd.org/pipermail/freebsd-current/2018-March/068806.html

includes a system with a completely non-ZFS context: UFS only. Quoting that 
part:


This is from an APU, no ZFS, UFS on a small mSATA device (the PC Engines
APU works as a firewall, router, PBX):

last pid:  9665;  load averages:  0.13,  0.13,  0.11   up 3+06:53:55  00:26:26
19 processes:  1 running, 18 sleeping
CPU:  0.3% user,  0.0% nice,  0.2% system,  0.0% interrupt, 99.5% idle
Mem: 27M Active, 6200K Inact, 83M Laundry, 185M Wired, 128K Buf, 675M Free
Swap: 7808M Total, 2856K Used, 7805M Free
[...]

The APU is running CURRENT (FreeBSD 12.0-CURRENT #42 r330608: Wed Mar  7
16:55:59 CET 2018 amd64).  Usually, the APU never(!) uses swap; now it has
been swapping like hell for a couple of days and I have to reboot it fairly
often.


Unless this is unrelated, it would suggest that ZFS and its ARC need not
be involved.

Would what you are investigating relative to your "NUMA and concurrency
related work" fit with such a non-ZFS (no-ARC) context?


I think there are probably two different bugs.  I believe the PID
controller has caused the laundry thread to become more aggressive,
causing more pageouts, which would in turn increase swap consumption.
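
For anyone who wants to watch whether that is what is happening on their
box, something along these lines should show the laundry queue and swap use
over time (sysctl names as on recent -CURRENT; adjust if they differ):

# Print the laundry and free page counts plus swap usage once a minute.
while sleep 60; do
    date
    sysctl vm.stats.vm.v_laundry_count vm.stats.vm.v_free_count
    swapinfo -h
done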


The back-pressure mechanisms in the ARC should've resolved the other
reports.  It's possible that I broke those, although if the reports from
11.x are to be believed, I don't know that it was me.  It is possible they
have been broken at different times for different reasons, so I will
continue to look.


Thanks,
Jeff



===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)




Speed up CD/DVD-based FreeBSD

2018-03-11 Thread O. Hartmann
We have a special case of running FreeBSD (actually a NanoBSD) from a CD/DVD.
The reason behind using a CD/DVD is to prevent tampering.

Now, after the GUI has started, the system automatically logs in a user and
starts Firefox.  But starting Firefox takes ~5-7 minutes, while booting the
operating system takes 2-3 minutes.  As you can imagine, that is not a very
useful startup time.

Is there a simple way, given enough RAM in the box, to speed up the process?
Loading Firefox makes the DVD drive move its head very often - at least I
assume so, because I can hear the head scratching around very intensely.

Does FreeBSD bring tools/facilities on board to achieve what I'm asking for,
or do I need an additional port/software package?
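
A rough sketch of one direction that might be worth trying (untested; the
paths are the stock ports locations and are only illustrative):

# Warm the page cache once after boot so later reads come from RAM instead
# of the disc (helps only while memory pressure stays low):
tar -cf /dev/null /usr/local/lib/firefox

# Or, assuming /tmp is memory-backed (as it often is on NanoBSD images),
# stage the Firefox tree there and run it from RAM; whether Firefox
# tolerates a relocated directory would need testing:
cp -R /usr/local/lib/firefox /tmp/firefox
env LD_LIBRARY_PATH=/tmp/firefox /tmp/firefox/firefox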

Any considerations are welcome.

Thanks in advance,

Oliver