Re: what do jails map 127.0.0.1 to?

2019-02-10 Thread Alan Somers
On Sun, Feb 10, 2019 at 5:51 PM Rick Macklem  wrote:
>
> I am finally back to looking at an old PR#205193.
>
> The problem is that the nfsuserd daemon expects upcalls from the kernel
> that are from localhost (127.0.0.1) and when jails are running on the system,
> 127.0.0.1 is mapped to some other IP#. (I think it might be the address of the
> first net interface on the machine, but I'm not sure?)
>
> Is there a way that nfsuserd.c can find out what this IP# is?
> (I have a patch that converts nfsuserd.c to using an AF_LOCAL socket, but that
>  breaks for some setups. I think it was when the directory the socket was
>  being created in is NFSv4 mounted, but I can't remember exactly how it fails.)
>
> Thanks for any help with this, rick

The easy way would be for nfsuserd to bind a socket to 127.0.0.1, then
use getsockname(2) to see what actual address it got bound to.
-Alan
___
freebsd-current@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"


what do jails map 127.0.0.1 to?

2019-02-10 Thread Rick Macklem
I am finally back to looking at an old PR#205193.

The problem is that the nfsuserd daemon expects upcalls from the kernel
that are from localhost (127.0.0.1) and when jails are running on the system,
127.0.0.1 is mapped to some other IP#. (I think it might be the address of the
first net interface on the machine, but I'm not sure?)

Is there a way that nfsuserd.c can find out what this IP# is?
(I have a patch that converts nfsuserd.c to using an AF_LOCAL socket, but that
 breaks for some setups. I think it was when the directory the socket was being
 created in is NFSv4 mounted, but I can't remember exactly how it fails.)

Thanks for any help with this, rick


Re: 13.0-CURRENT oddity in output of dialog(1)

2019-02-10 Thread Alan Somers
On Sun, Feb 10, 2019 at 2:27 PM David Boyd  wrote:
>
> In 13.0-CURRENT, dialog(1) fills lines shorter than the box width with
> something other than the background color.  In an xterm session, the
> fill is white. In a console session, the fill is black.
>
> This appears to be a regression in the dialog(1) utility not seen in
> previous (11.2-RELEASE and 12.0-RELEASE) releases.
>
> To recreate:
>
> cat /etc/rc.conf | dialog --programbox 23 76
>
> No other problems have been observed in the dialog(1) utility.
>
> Thanks.
>
> David Boyd.

CC bapt@.  I've confirmed that the bug was introduced by r339488.
Unfortunately, that hardly narrows it down.  The diff is 49 kSLOC.
-Alan


13.0-CURRENT oddity in output of dialog(1)

2019-02-10 Thread David Boyd
In 13.0-CURRENT, dialog(1) fills lines shorter than the box width with 
something other than the background color.  In an xterm session, the
fill is white. In a console session, the fill is black.

This appears to be a regression in the dialog(1) utility not seen in
previous (11.2-RELEASE and 12.0-RELEASE) releases.

To recreate:

cat /etc/rc.conf | dialog --programbox 23 76

No other problems have been observed in the dialog(1) utility.

Thanks.

David Boyd.



Re: "Oddness" in head since around r343678 or so

2019-02-10 Thread Niclas Zeising

On 2019-02-10 16:35, Niclas Zeising wrote:

On 2019-02-08 10:27, Alexander Leidinger wrote:

Hi,

I recently noticed some generic slowness myself. I experienced this 
while replacing disks in a raidz with bigger ones. Long story short: 
check top -S to see whether vnlru runs for a long period at high 
CPU. If yes, increase kern.maxvnodes (I increased mine tenfold). Note: 
we should improve the admin page in the FAQ; the vnlru entry could use 
a few more hints and explanations.


If you encounter the same issue, we have probably introduced a change 
somewhere with an unintended side effect.


Bye,
Alexander.
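[Editor's note: the check and workaround described above map to top(1) and sysctl(8) roughly as follows. The new limit is purely illustrative, not a recommendation.]

```shell
# Watch processes including kernel ones; look for vnlru stuck at high CPU.
top -S

# See how close the vnode table is to its limit.
sysctl vfs.numvnodes kern.maxvnodes

# Raise the limit (illustrative value; Alexander went to ~10x his old one).
sysctl kern.maxvnodes=2000000

# Add the same setting to /etc/sysctl.conf to persist it across reboots.
```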



Hi!
I'm seeing this as well, on 13-CURRENT.  I updated a computer from the 
last January snapshot (30 or 31 of January, I can't remember) and it 
seems disk IO is very slow.  I remember an svn checkout taking a very 
long time, with the svn process pegged at 100% according to top.  I 
can't see the vnlru process running, though I haven't looked closely, 
and I haven't tried the maxvnodes workaround.  Something has changed, 
though.
These are systems using ZFS, both mirrored and single disk.  Gstat 
shows the disks are mostly idle.


I know this is a lousy bug report, but this, and the feeling that things 
are slower than usual, is what I have for now.

Regards


Hi!
I did some more digging.  In short, disabling options COVERAGE and 
options KCOV made my test case much faster.


My test:
boot system
create a new zfs dataset (zroot/home/test in my case)
time a checkout of https://svn.freebsd.org/base/head, putting the files 
in the new zfs dataset.


This is in no way scientific, since I only ran the test once on each 
kernel, and using something on the network means I'm susceptible to 
varying network speeds and so on.  Still, in this specific scenario a 
kernel without those options is about three times faster than one with 
them, at least on the computer where I ran the tests.


I noticed in the commit log that the coverage and kcov options have 
been disabled again, albeit for a different reason.  Perhaps they 
should remain off unless the extra overhead can be disabled at 
runtime, similar to witness.

Regards
--
Niclas


Re: "Oddness" in head since around r343678 or so

2019-02-10 Thread Niclas Zeising

On 2019-02-08 10:27, Alexander Leidinger wrote:

Hi,

I recently noticed some generic slowness myself. I experienced this 
while replacing disks in a raidz with bigger ones. Long story short: 
check top -S to see whether vnlru runs for a long period at high CPU. 
If yes, increase kern.maxvnodes (I increased mine tenfold). Note: we 
should improve the admin page in the FAQ; the vnlru entry could use a 
few more hints and explanations.


If you encounter the same issue, we have probably introduced a change 
somewhere with an unintended side effect.


Bye,
Alexander.



Hi!
I'm seeing this as well, on 13-CURRENT.  I updated a computer from the 
last January snapshot (30 or 31 of January, I can't remember) and it 
seems disk IO is very slow.  I remember an svn checkout taking a very 
long time, with the svn process pegged at 100% according to top.  I 
can't see the vnlru process running, though I haven't looked closely, 
and I haven't tried the maxvnodes workaround.  Something has changed, 
though.
These are systems using ZFS, both mirrored and single disk.  Gstat 
shows the disks are mostly idle.


I know this is a lousy bug report, but this, and the feeling that things 
are slower than usual, is what I have for now.

Regards
--
--


64-bit integer overflow computing user CPU time in calcru1() in kern_resource.c

2019-02-10 Thread sthaug
There is a 64-bit integer overflow computing user CPU time in calcru1()
in kern_resource.c. This was discovered because CPU statistics from
the PowerDNS recursor name server stopped working (essentially, they got
"stuck") after a while:

time_t  milliseconds
1547818832  301274008.418503
1547822864  301784302.665002
1547826896  302310096.107672
1547830928  302844638.859146
1547834960  303381189.070208
1547838992  303924399.662413
1547843024  304477529.572919
1547847056  305025750.193424
1547851088  305544141.140036
1547855120  306001630.092938
1547859152  306153010.535298
1547863184  306141696.00
1547867216  306141696.00
1547871248  306141696.00
1547875280  306141696.00
1547879312  306141696.00
1547883344  306141696.00
1547887376  306141696.00

Note that the number just stops increasing beyond 1547863184.

I complained about this on the Pdns-user mailing list,

https://mailman.powerdns.com/pipermail/pdns-users/2019-January/025739.html

and received help from Bert Hubert of PowerDNS to find

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=227689

and it definitely looks like this is the bug causing the disappearing
CPU statistics graphs.

I fixed the problem by following the link from the FreeBSD bug above to

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=76972

which has an attachment (kern_resource.c.patch.txt) at

https://bz-attachments.freebsd.org/attachment.cgi?id=50537

After adding this patch and rebooting, the problem has not reoccurred.
I assume Bjoern Zeeb's patch has indeed fixed the problem.

Note that the problem was discovered in 11.2-STABLE r338949 - however,
looking at

https://svnweb.freebsd.org/base/head/sys/kern/kern_resource.c?view=markup

exactly the same user CPU time code seems to be present in HEAD, so I
assume the same overflow is also present.

I sent a message about this problem on the FreeBSD-stable mailing list
recently,

https://lists.freebsd.org/pipermail/freebsd-stable/2019-February/090523.html

but with no reaction there, I'm now trying FreeBSD-current. I'm hoping
for bz's patch to be applied to HEAD, and at some point an MFC to
11.2-STABLE.

Steinar Haug, Nethelp consulting, sth...@nethelp.no