Re: CURRENT slow and shaky network stability

2016-04-02 Thread Cy Schubert
In message <20160402113910.14de7eaf.ohart...@zedat.fu-berlin.de>, "O. Hartmann" writes:
> On Sat, 2 Apr 2016 10:55:03 +0200, "O. Hartmann" wrote:
> 
> > On Sat, 02 Apr 2016 01:07:55 -0700, Cy Schubert wrote:
> > 
> > > In message <56f6c6b0.6010...@protected-networks.net>, Michael Butler writes:
> > > > -current is not great for interactive use at all. The strategy of
> > > > pre-emptively dropping idle processes to swap is hurting .. big time.
> > >
> > > FreeBSD doesn't "preemptively" or arbitrarily push pages out to disk. LRU
> > > doesn't do this.
> > >
> > > > Compare inactive memory to swap in this example ..
> > > >
> > > > 110 processes: 1 running, 108 sleeping, 1 zombie
> > > > CPU:  1.2% user,  0.0% nice,  4.3% system,  0.0% interrupt, 94.5% idle
> > > > Mem: 474M Active, 1609M Inact, 764M Wired, 281M Buf, 119M Free
> > > > Swap: 4096M Total, 917M Used, 3178M Free, 22% Inuse
> > >
> > > To analyze this you need to capture vmstat output. You'll see the free
> > > pool dip below a threshold and pages go out to disk in response. If you
> > > have daemons with small working sets, pages that are not part of the
> > > working sets for daemons or applications will eventually be paged out.
> > > This is not a bad thing. In your example above, the 281 MB of UFS buffers
> > > are more active than the 917 MB paged out. If a page is paged out and
> > > never used again, it doesn't hurt. However, the 281 MB of buffers saves
> > > you I/O. The inactive pages are part of your free pool that were active
> > > at one time but now are not. They may be reclaimed, and if they are,
> > > you've just saved more I/O.
> > >
> > > Top is a poor tool to analyze memory use; vmstat is the better tool to
> > > help understand memory use. Inactive memory isn't a bad thing per se.
> > > Monitor page-outs, scan rate and page reclaims.
> >
> > I give up! I tried to check via ssh/vmstat what is going on. Last lines
> > before the broken pipe:
> >
> > [...]
> > procs     memory        page                     disks     faults        cpu
> >  r b  w  avm   fre    flt  re  pi  po    fr   sr ad0 ad1   in     sy    cs us sy id
> > 22 0 22 5.8G  1.0G 46319   0   0   0 55721 1297   0   4  219 23907  5400 95  5  0
> > 22 0 22 5.4G  1.3G 51733   0   0   0 72436 1162   0   0  108 40869  3459 93  7  0
> > 15 0 22  12G  1.2G 54400   0  27   0 52188 1160   0  42  148 52192  4366 91  9  0
> > 14 0 22  12G  1.0G 44954   0  37   0 37550 1179   0  39  141 86209  4368 88 12  0
> > 26 0 22  12G  1.1G 60258   0  81   0 69459 1119   0  27  123 779569 704359 87 13  0
> > 29 3 22  13G  774M 50576   0  68   0 32204 1304   0   2  102 507337 484861 93  7  0
> > 27 0 22  13G  937M 47477   0  48   0 59458 1264   3   2  112 68131 44407 95  5  0
> > 36 0 22  13G  829M 83164   0   2   0 82575 1225   1   0  126 99366 38060 89 11  0
> > 35 0 22 6.2G  1.1G 98803   0  13   0 121375 1217   2   8  112 99371  4999 85 15  0
> > 34 0 22  13G  723M 54436   0  20   0 36952 1276   0  17  153 29142  4431 95  5  0
> > Fssh_packet_write_wait: Connection to 192.168.0.1 port 22: Broken pipe
> >
> > This makes this crap system completely unusable. The server in question
> > (FreeBSD 11.0-CURRENT #20 r297503: Sat Apr  2 09:02:41 CEST 2016 amd64) was
> > running a poudriere bulk job. I cannot even determine which terminal goes
> > down first - another one, idle for much more time than the one showing the
> > "vmstat 5" output, is still alive!
> >
> > I consider this a serious bug, and what has happened since this "fancy"
> > update is no benefit. :-(
> 
> By the way - it might be of interest and some hint.
> 
> One of my boxes acts as server and gateway; it uses NAT and IPFW. When it is
> under high load, as it was today, passing the network flow from the ISP into
> the network for the clients is sometimes extremely slow. I do not consider
> this the reason for the collapsing ssh sessions, since the incident also
> happens under no load, but in the overall view of the problem this could be
> a hint - I hope.

Natd is a critical part of your network infrastructure. Start it with rtprio 1 
natd, or apply rtprio 1 to it after the fact. It won't hurt, and it takes this 
variable out of consideration as much as we can.
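A hedged sketch of that suggestion, assuming natd runs as a standalone daemon (the natd path and config-file flag are illustrative; check rtprio(1) on the box for the exact syntax):

```sh
# Start natd under realtime priority 1 from the outset ("rtprio 1 natd"):
rtprio 1 /sbin/natd -f /etc/natd.conf

# Or raise an already-running natd "after the fact"; rtprio(1) addresses an
# existing process by its pid written with a leading dash:
rtprio 1 -"$(pgrep -x natd)"
```

The same effect at boot time would need the natd startup wrapped accordingly, which is system-specific.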


-- 
Cheers,
Cy Schubert  or 
FreeBSD UNIX:     Web:  http://www.FreeBSD.org

The need of the many outweighs the greed of the few.



___
freebsd-current@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"

Re: CURRENT slow and shaky network stability

2016-04-02 Thread Cy Schubert
In message <20160402231955.41b05526.ohart...@zedat.fu-berlin.de>, "O. Hartmann" writes:
> [earlier thread quoted in full; trimmed]
> I just checked on one box that "broke pipe" very quickly after I started
> poudriere, while it had done well for a couple of hours before the pipe
> broke. It seems load-dependent when the ssh session gets wrecked, but more
> important, after the long-haul poudriere run, I rebooted the 

Re: svn commit: r297435 - head: still problems for stage 3 when gcc 4.2.1 is avoided (powerpc64 self-hosted build)

2016-04-02 Thread Mark Millard
[My testing for the likes of the below does not yet extend outside powerpc64 
contexts.]

For self-hosted powerpc64-xtoolchain-gcc/powerpc64-gcc use with, say, gcc49 
materials as the so-called "host" compiler tools, I have not yet found a way 
to avoid the following workaround:

> # ls -l /usr/lib/libstdc++.*
> lrwxr-xr-x  1 root  wheel  17 Feb 23 00:09 /usr/lib/libstdc++.a -> /usr/lib/libc++.a
> lrwxr-xr-x  1 root  wheel  18 Feb 23 00:09 /usr/lib/libstdc++.so -> /usr/lib/libc++.so



But I now appear to have a src.conf (or a family of them) that avoids needing 
workarounds in /usr/local/include and /usr/local/lib for filename conflicts. 
This is based on CC/CXX ("HOST") and XCC/XCXX ("CROSS") in src.conf being the 
likes of:

"HOST" (CC/CXX):
> CC=env C_INCLUDE_PATH=/usr/include /usr/local/bin/gcc49 -L/usr/lib
> CXX=env C_INCLUDE_PATH=/usr/include CPLUS_INCLUDE_PATH=/usr/include/c++/v1 
> /usr/local/bin/g++49 -std=c++11 -nostdinc++ -L/usr/lib

and. . .

"CROSS" (XCC/XCXX):
> TO_TYPE=powerpc64
> TOOLS_TO_TYPE=${TO_TYPE}
> . . .
> VERSION_CONTEXT=11.0
> . . .
> XCC=/usr/local/bin/${TOOLS_TO_TYPE}-portbld-freebsd${VERSION_CONTEXT}-gcc
> XCXX=/usr/local/bin/${TOOLS_TO_TYPE}-portbld-freebsd${VERSION_CONTEXT}-g++

In other words: CROSS use is already handled for /usr/local/. . . given just 
the compiler paths, but HOST needed some special src.conf adjustments beyond 
the compiler paths.
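Pulled together, the fragments above amount to a src.conf sketch along these lines (this is simply the quoted pieces assembled in one place; the ". . ." elisions from the original are preserved as comments, and nothing beyond the quoted settings is implied):

```make
# "HOST" compiler settings (gcc49 from ports):
CC=env C_INCLUDE_PATH=/usr/include /usr/local/bin/gcc49 -L/usr/lib
CXX=env C_INCLUDE_PATH=/usr/include CPLUS_INCLUDE_PATH=/usr/include/c++/v1 /usr/local/bin/g++49 -std=c++11 -nostdinc++ -L/usr/lib

# "CROSS" compiler settings (powerpc64-gcc from ports):
TO_TYPE=powerpc64
TOOLS_TO_TYPE=${TO_TYPE}
# . . . (settings elided in the original)
VERSION_CONTEXT=11.0
# . . .
XCC=/usr/local/bin/${TOOLS_TO_TYPE}-portbld-freebsd${VERSION_CONTEXT}-gcc
XCXX=/usr/local/bin/${TOOLS_TO_TYPE}-portbld-freebsd${VERSION_CONTEXT}-g++
```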

So far it appears that gcc49 materials can be used in "CROSS" and/or 
powerpc64-gcc materials can be used in "HOST" via just appropriate 
compiler-path editing. (I have jumped to testing -r297514 but am still doing 
various build tests, so this aspect is subject to updates.) It appears to be 
possible to use just one compiler/tool family, but in the two different ways, 
instead of using a mix of two compiler/tool families.

Historically I've not gotten gcc variants to make a working lib32 for powerpc64, 
and I've yet to retest this, so no claims about the lib32 status are implied 
here. (The problem was code in crtbeginS that arbitrarily used R30 in a way the 
context was not set up for, so the crtbeginS code was dereferencing arbitrary 
addresses.)


===
Mark Millard
markmi at dsl-only.net

On 2016-Apr-1, at 4:35 PM, Mark Millard  wrote:

[Just a top-post showing what powerpc64-xtoolchain-gcc/powerpc64-gcc has for 
the default include search places:]

powerpc64-xtoolchain-gcc/powerpc64-gcc also looks in /usr/local/include before 
/usr/include : see below.

> # portmaster --list-origins
> . . .
> devel/powerpc64-xtoolchain-gcc
> . . .
> 
> # /usr/local/bin/powerpc64-portbld-freebsd11.0-gcc --version
> powerpc64-portbld-freebsd11.0-gcc (FreeBSD Ports Collection for powerpc64) 5.3.0
> Copyright (C) 2015 Free Software Foundation, Inc.
> This is free software; see the source for copying conditions.  There is NO
> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
> 
> # echo '' | /usr/local/bin/powerpc64-portbld-freebsd11.0-gcc -v -x c++ - -o /dev/null
> . . .
> ignoring nonexistent directory "/usr/local/lib/gcc/powerpc64-portbld-freebsd11.0/5.3.0/../../../../include/c++/5.3.0"
> ignoring nonexistent directory "/usr/local/lib/gcc/powerpc64-portbld-freebsd11.0/5.3.0/../../../../include/c++/5.3.0/powerpc64-portbld-freebsd11.0"
> ignoring nonexistent directory "/usr/local/lib/gcc/powerpc64-portbld-freebsd11.0/5.3.0/../../../../include/c++/5.3.0/backward"
> ignoring nonexistent directory "/usr/local/lib/gcc/powerpc64-portbld-freebsd11.0/5.3.0/../../../../powerpc64-portbld-freebsd11.0/include"
> #include "..." search starts here:
> #include <...> search starts here:
> /usr/local/lib/gcc/powerpc64-portbld-freebsd11.0/5.3.0/include
> /usr/local/include
> /usr/local/lib/gcc/powerpc64-portbld-freebsd11.0/5.3.0/include-fixed
> /usr/include
> End of search list.
> . . .


===
Mark Millard
markmi at dsl-only.net

On 2016-Apr-1, at 7:25 AM, Warner Losh  wrote:
> 
> 
> 
> On Fri, Apr 1, 2016 at 2:25 AM, Dimitry Andric  wrote:
> On 01 Apr 2016, at 00:44, Warner Losh  wrote:
>> 
>>> On Mar 31, 2016, at 4:34 PM, Bryan Drewery  wrote:
>>> I didn't realize the ports compiler was defaulting /usr/local/include
>>> into the search path now.  It does not have /usr/local/lib in the
>>> default library path as far as I can tell.  It's also broken for its
>>> -rpath (noted in its pkg-message).  So having a default
>>> /usr/local/include path seems odd.
>> 
>> It has for a while now. It’s one of the maddening inconsistencies that 
>> abound in this
>> area. I took a poll a while ago and there seemed to be widespread support 
>> for adding
>> it to the base compiler.
> 
> This was the main reason /usr/local/include was *not* included in the
> base compiler, otherwise it would unpredictably pick up headers in
> /usr/local/include during builds.  You can never know which conflicting
> headers a certain user has installed in /usr/local/include... 

Re: CURRENT slow and shaky network stability

2016-04-02 Thread Cy Schubert
In message , Kevin Oberman writes:
> [earlier thread quoted in full; trimmed]

Re: CURRENT slow and shaky network stability

2016-04-02 Thread Cy Schubert
In message <20160402105503.7ede5be1.ohart...@zedat.fu-berlin.de>, "O. Hartmann" writes:
> [quote of the thread, including the "vmstat 5" report above, trimmed]

How many CPUs does FreeBSD see? (By CPUs I mean cores times threads; e.g. my 
dual-core Intel has two threads per core, so FreeBSD sees four CPUs.)

The load on the box shouldn't exceed about two runnable processes per CPU or 
you will notice performance issues. Ideally we look at the load average first. 
If it's high, we check CPU%. If that looks good, we look at memory and I/O. 
With the scant information at hand right now I see a possible CPU issue. The 
scan rate looks high, but there's no paging, so I'd consider it borderline.
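The kind of check described here can be mechanised. The following is an illustrative sketch: the field numbers match the 19-column "vmstat 5" output quoted earlier in the thread, and the thresholds are made-up examples, not tuning guidance:

```shell
# Flag vmstat lines showing memory pressure: in the quoted output, field 9
# is po (pages paged out per second) and field 11 is sr (pages scanned by
# the page daemon per second).
flag_pressure() {
    awk '$1 ~ /^[0-9]+$/ {
        if ($9 > 0)     print "page-outs:      " $0
        if ($11 > 1000) print "high scan rate: " $0
    }'
}

# Two lines from the report above: both show a high scan rate but no
# page-outs, matching the "borderline" reading.
flag_pressure <<'EOF'
22 0 22 5.8G  1.0G 46319   0   0   0 55721 1297   0   4  219 23907  5400 95  5  0
15 0 22  12G  1.2G 54400   0  27   0 52188 1160   0  42  148 52192  4366 91  9  0
EOF
```

On a live box the same filter would be fed by `vmstat 5` directly.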


-- 
Cheers,
Cy Schubert  or 
FreeBSD UNIX:     Web:  http://www.FreeBSD.org

The need of the many outweighs the greed of the few.






Re: CURRENT slow and shaky network stability

2016-04-02 Thread Kevin Oberman
On Sat, Apr 2, 2016 at 2:19 PM, O. Hartmann wrote:

> [earlier thread and follow-up quoted in full; trimmed]


Re: CURRENT slow and shaky network stability

2016-04-02 Thread O. Hartmann
Am Sat, 2 Apr 2016 11:39:10 +0200
"O. Hartmann"  schrieb:

> Am Sat, 2 Apr 2016 10:55:03 +0200
> "O. Hartmann"  schrieb:
> 
> > Am Sat, 02 Apr 2016 01:07:55 -0700
> > Cy Schubert  schrieb:
> >   
> > > In message <56f6c6b0.6010...@protected-networks.net>, Michael Butler 
> > > writes:
> > > > -current is not great for interactive use at all. The strategy of
> > > > pre-emptively dropping idle processes to swap is hurting .. big time.   
> > > >
> > > 
> > > FreeBSD doesn't "preemptively" or arbitrarily push pages out to disk. LRU 
> > > doesn't do this.
> > > 
> > > > 
> > > > Compare inactive memory to swap in this example ..
> > > > 
> > > > 110 processes: 1 running, 108 sleeping, 1 zombie
> > > > CPU:  1.2% user,  0.0% nice,  4.3% system,  0.0% interrupt, 94.5% idle
> > > > Mem: 474M Active, 1609M Inact, 764M Wired, 281M Buf, 119M Free
> > > > Swap: 4096M Total, 917M Used, 3178M Free, 22% Inuse  
> > > 
> > > To analyze this you need to capture vmstat output. You'll see the free 
> > > pool 
> > > dip below a threshold and pages go out to disk in response. If you have 
> > > daemons with small working sets, pages that are not part of the working 
> > > sets for daemons or applications will eventually be paged out. This is 
> > > not 
> > > a bad thing. In your example above, the 281 MB of UFS buffers are more 
> > > active than the 917 MB paged out. If it's paged out and never used again, 
> > > then it doesn't hurt. However the 281 MB of buffers saves you I/O. The 
> > > inactive pages are part of your free pool that were active at one time 
> > > but 
> > > now are not. They may be reclaimed and if they are, you've just saved 
> > > more 
> > > I/O.
> > > 
> > > Top is a poor tool to analyze memory use. Vmstat is the better tool to 
> > > help 
> > > understand memory use. Inactive memory isn't a bad thing per se. Monitor 
> > > page outs, scan rate and page reclaims.
> > > 
> > > 
> > 
> > I give up! Tried to check via ssh/vmstat what is going on. Last lines 
> > before broken
> > pipe:
> > 
> > [...]
> > procs  memory   pagedisks faults cpu
> > r b w  avm   fre   flt  re  pi  pofr   sr ad0 ad1   insycs us 
> > sy id
> > 22 0 22 5.8G  1.0G 46319   0   0   0 55721 1297   0   4  219 23907  5400 95 
> >  5  0
> > 22 0 22 5.4G  1.3G 51733   0   0   0 72436 1162   0   0  108 40869  3459 93 
> >  7  0
> > 15 0 22  12G  1.2G 54400   0  27   0 52188 1160   0  42  148 52192  4366 91 
> >  9  0
> > 14 0 22  12G  1.0G 44954   0  37   0 37550 1179   0  39  141 86209  4368 88 
> > 12  0
> > 26 0 22  12G  1.1G 60258   0  81   0 69459 1119   0  27  123 779569 704359 
> > 87 13  0
> > 29 3 22  13G  774M 50576   0  68   0 32204 1304   0   2  102 507337 484861 
> > 93  7  0
> > 27 0 22  13G  937M 47477   0  48   0 59458 1264   3   2  112 68131 44407 95 
> >  5  0
> > 36 0 22  13G  829M 83164   0   2   0 82575 1225   1   0  126 99366 38060 89 
> > 11  0
> > 35 0 22 6.2G  1.1G 98803   0  13   0 121375 1217   2   8  112 99371  4999 
> > 85 15  0
> > 34 0 22  13G  723M 54436   0  20   0 36952 1276   0  17  153 29142  4431 95 
> >  5  0
> > Fssh_packet_write_wait: Connection to 192.168.0.1 port 22: Broken pipe
> > 
> > 
> > This makes this crap system completely unusable. The server in question
> > (FreeBSD 11.0-CURRENT #20 r297503: Sat Apr  2 09:02:41 CEST 2016 amd64) was
> > running a poudriere bulk job. I cannot even determine which terminal goes
> > down first - another one, idle much longer than the one showing the
> > "vmstat 5" output, is still alive!
> > 
> > I consider this a serious bug; nothing that has happened since this
> > "fancy" update has been a benefit. :-(
> 
> By the way - it might be of interest and some hint.
> 
> One of my boxes acts as server and gateway, using NAT and IPFW. When it is
> under high load, as it was today, passing the network flow from the ISP to
> clients on the internal network is sometimes extremely slow. I do not
> consider this the reason for the collapsing ssh sessions, since the incident
> also happens under no load, but in the overall view of the problem it could
> be a hint - I hope.

I just checked one box that "broke pipe" very quickly after I started
poudriere, although it had run fine for a couple of hours before the pipe
broke. It seems load-dependent when the ssh session gets wrecked. More
importantly: after the long-haul poudriere run I rebooted the box and tried
again, and got the broken pipe a couple of minutes after poudriere started.
Then I left the box alone for several hours, logged in again, and checked the
swap. Although there had been no load or other pressure for hours, 31% of
swap was still in use (the box has 16 GB of RAM and is propelled by a Xeon
E3-1245 V2).
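As an aside, sessions that die with Fssh_packet_write_wait under load can
sometimes be ridden out with client-side keepalives. A minimal sketch for
~/.ssh/config; the host name is hypothetical, the options are standard
OpenSSH client options:

```
# Hypothetical host entry: send an application-level keepalive every
# 15 seconds and tolerate 8 missed replies before OpenSSH declares
# the connection dead (roughly two minutes of stall).
Host gateway
    ServerAliveInterval 15
    ServerAliveCountMax 8
```

This does not fix whatever stalls the connection, but it distinguishes a
stalled-but-alive link from a genuinely dead one.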




Re: /usr/bin/make segmentation fault

2016-04-02 Thread Simon J. Gerraty
Roger Marquis  wrote:

> Don't know how to debug this and cannot post the Makefile in question but it

Can you provide something similar that triggers the issue?
It's rather hard to tell what's wrong without knowing what *should* be
happening.

> last worked in 8.4.  In 11-CURRENT Var_Value appears to return NULL at
> /usr/src/contrib/bmake/compat.c:621
> 
>   Var_Set(IMPSRC, Var_Value(TARGET, gn, ), pgn, 0);
> 
> which passes the NULL to Var_Set as the second argument (val) where,
> eventually, /usr/src/contrib/bmake/var.c:973

Which means TARGET hasn't been set, and there's probably something
rather interesting about your makefile ;-)
This should of course never happen.

Even if Var_Set checked the failure and returned harmlessly you have a
serious problem.

> Any and all pointers appreciated,

What is the content of gn at

> #2  0x00402040 in Compat_Make (gnp=0x800a1c340,
> #pgnp=0x800a1df00)
> at /usr/src/contrib/bmake/compat.c:621
___
freebsd-current@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"


Re: [FreeBSD-Announce] FreeBSD 2.2.9 Released!

2016-04-02 Thread O. Hartmann
On Sat, 2 Apr 2016 19:15:31 +0800, Ccs189 wrote:

> Sorry,
> Just to ask a newbie question: is this the newest version of FreeBSD? Or is
> it April 1 ... ??
> 
> Best regards,
> Chan

;-) Back to the '90s

> 
> 
> > On 2 Apr 2016, at 12:29 AM, Ruslan Ermilov  wrote:
> > 
> > Hi,
> > 
> > At Nginx, we are committed to supporting a wide range of
> > BSD-like operating systems and their releases.  It's our
> > great pleasure to report that it's now again possible to
> > build current nginx sources with FreeBSD 2.2.9.  We know
> > that many highload setups still use this FreeBSD version.
> >   
> >> On Fri, Apr 01, 2016 at 01:45:04PM +, Maxim Dounin wrote:
> >> details:   http://hg.nginx.org/nginx/rev/8426275a13fd
> >> branches:  
> >> changeset: 6500:8426275a13fd
> >> user:  Maxim Dounin 
> >> date:  Fri Apr 01 16:38:31 2016 +0300
> >> description:
> >> Compatibility with FreeBSD 2.2.9.
> >> 
> >> Added (RTLD_NOW | RTLD_GLOBAL) to dlopen() test.  There is no RTLD_GLOBAL
> >> on FreeBSD 2.2.9.
> >> 
> >> Added uint32_t test, with fallback to u_int32_t, similar to uint64_t one.
> >> Added fallback to u_int32_t in in_addr_t test.
> >> 
> >> With these changes it is now possible to compile nginx on FreeBSD 2.2.9
> >> with only few minor warnings (assuming -Wno-error).  
> >   
> >> On Sat, Apr 01, 2006 at 01:30:09PM -0700, Scott Long wrote:
> >> -BEGIN PGP SIGNED MESSAGE-
> >> Hash: SHA1
> >> 
> >> It is my great pleasure and privilege to announce the availability of
> >> FreeBSD 2.2.9-RELEASE.  This release is the culmination of SEVENTY-SEVEN
> >> months of tireless work by the FreeBSD developers, users, their children,
> >> and their pets.  Significant features in this release:
> >> 
> >> - - XFree86 3.3.3, the industry leader in support for cutting edge PCI
> >>  graphics adapters and 2D acceleration.
> >> - - The 8GB barrier in IDE drive sizes has finally been broken.  The wd(4)
> >>  driver now supports unimaginable sizes of 137GB on a single drive!
> >> - - Support for all of the latest high-speed FAST-WIDE (20MB/s) SCSI-2
> >>  controllers.
> >> - - The Linux emulator is now able to run Quake2 out-of-the-box.
> >> 
> >> A full description of the release can be found here:
> >> 
> >>  ftp://ftp.FreeBSD.org/pub/FreeBSD/releases/i386/2.2.9-RELEASE/README.TXT
> >>  ftp://ftp.FreeBSD.org/pub/FreeBSD/releases/i386/2.2.9-RELEASE/RELNOTES.TXT
> >> 
> >> 
> >> Availability
> >> -
> >> 
> >> FreeBSD 2.2.9-RELEASE supports the i386 architecture and can be installed
> >> directly over the net using bootable media or copied to a local NFS/FTP
> >> server.
> >> 
> >> Please continue to support the FreeBSD Project by purchasing media
> >> from one of our supporting vendors.  The following companies will be
> >> offering FreeBSD 2.2.9 based products:
> >> 
> >> ~   FreeBSD Mall, Inc.http://www.freebsdmall.com/
> >> ~   Daemonnews, Inc.  http://www.bsdmall.com/freebsd1.html
> >> 
> >> If you can't afford FreeBSD on media, are impatient, or just want to
> >> use it for evangelism purposes, then by all means download the ISO
> >> images.  We can't promise that all the mirror sites will carry the
> >> larger ISO images, but they will at least be available from the
> >> following sites.
> >> 
> >> FTP
> >> ---
> >> 
> >> At the time of this announcement the following FTP sites have FreeBSD
> >> 2.2.9-RELEASE available.
> >> 
> >>  ftp://ftp.FreeBSD.org/pub/FreeBSD/releases
> >> 
> >> FreeBSD is also available via anonymous FTP from mirror sites in the
> >> following countries: Argentina, Australia, Brazil, Bulgaria, Canada,
> >> China, Czech Republic, Denmark, Estonia, Finland, France, Germany,
> >> Hong Kong, Hungary, Iceland, Ireland, Japan, Korea, Lithuania,
> >> the Netherlands, New Zealand, Poland, Portugal, Romania,
> >> Russia, Saudi Arabia, South Africa, Slovak Republic, Slovenia, Spain,
> >> Sweden, Taiwan, Thailand, Ukraine, and the United Kingdom.
> >> 
> >> Before trying the central FTP site, please check your regional
> >> mirror(s) first by going to:
> >> 
> >> ftp://ftp..FreeBSD.org/pub/FreeBSD
> >> 
> >> Any additional mirror sites will be labeled ftp2, ftp3 and so on.
> >> 
> >> More information about FreeBSD mirror sites can be found at:
> >> 
> >> http://www.FreeBSD.org/doc/en_US.ISO8859-1/books/handbook/mirrors-ftp.html
> >> 
> >> For instructions on installing FreeBSD, please see Chapter 2 of The
> >> FreeBSD Handbook.  It provides a complete installation walk-through
> >> for users new to FreeBSD, and can be found online at:
> >> 
> >> http://www.FreeBSD.org/doc/en_US.ISO8859-1/books/handbook/install.html
> >> 
> >> Acknowledgments
> >> 
> >> 
> >> The release engineering team for 2.2.9-RELEASE includes:
> >> 
> >> Scott Long  Release Engineering
> >> Ruslan Ermilov I386, creative director
> >> Sniffy The Wonder Cat

Re: [FreeBSD-Announce] FreeBSD 2.2.9 Released!

2016-04-02 Thread Ccs189
Sorry,
Just to ask a newbie question: is this the newest version of FreeBSD? Or is it
April 1 ... ??

Best regards,
Chan


> On 2 Apr 2016, at 12:29 AM, Ruslan Ermilov  wrote:
> 
> Hi,
> 
> At Nginx, we are committed to supporting a wide range of
> BSD-like operating systems and their releases.  It's our
> great pleasure to report that it's now again possible to
> build current nginx sources with FreeBSD 2.2.9.  We know
> that many highload setups still use this FreeBSD version.
> 
>> On Fri, Apr 01, 2016 at 01:45:04PM +, Maxim Dounin wrote:
>> details:   http://hg.nginx.org/nginx/rev/8426275a13fd
>> branches:  
>> changeset: 6500:8426275a13fd
>> user:  Maxim Dounin 
>> date:  Fri Apr 01 16:38:31 2016 +0300
>> description:
>> Compatibility with FreeBSD 2.2.9.
>> 
>> Added (RTLD_NOW | RTLD_GLOBAL) to dlopen() test.  There is no RTLD_GLOBAL
>> on FreeBSD 2.2.9.
>> 
>> Added uint32_t test, with fallback to u_int32_t, similar to uint64_t one.
>> Added fallback to u_int32_t in in_addr_t test.
>> 
>> With these changes it is now possible to compile nginx on FreeBSD 2.2.9
>> with only few minor warnings (assuming -Wno-error).
> 
>> On Sat, Apr 01, 2006 at 01:30:09PM -0700, Scott Long wrote:
>> -BEGIN PGP SIGNED MESSAGE-
>> Hash: SHA1
>> 
>> It is my great pleasure and privilege to announce the availability of
>> FreeBSD 2.2.9-RELEASE.  This release is the culmination of SEVENTY-SEVEN
>> months of tireless work by the FreeBSD developers, users, their children,
>> and their pets.  Significant features in this release:
>> 
>> - - XFree86 3.3.3, the industry leader in support for cutting edge PCI
>>  graphics adapters and 2D acceleration.
>> - - The 8GB barrier in IDE drive sizes has finally been broken.  The wd(4)
>>  driver now supports unimaginable sizes of 137GB on a single drive!
>> - - Support for all of the latest high-speed FAST-WIDE (20MB/s) SCSI-2
>>  controllers.
>> - - The Linux emulator is now able to run Quake2 out-of-the-box.
>> 
>> A full description of the release can be found here:
>> 
>>  ftp://ftp.FreeBSD.org/pub/FreeBSD/releases/i386/2.2.9-RELEASE/README.TXT
>>  ftp://ftp.FreeBSD.org/pub/FreeBSD/releases/i386/2.2.9-RELEASE/RELNOTES.TXT
>> 
>> 
>> Availability
>> -
>> 
>> FreeBSD 2.2.9-RELEASE supports the i386 architecture and can be installed
>> directly over the net using bootable media or copied to a local NFS/FTP
>> server.
>> 
>> Please continue to support the FreeBSD Project by purchasing media
>> from one of our supporting vendors.  The following companies will be
>> offering FreeBSD 2.2.9 based products:
>> 
>> ~   FreeBSD Mall, Inc.http://www.freebsdmall.com/
>> ~   Daemonnews, Inc.  http://www.bsdmall.com/freebsd1.html
>> 
>> If you can't afford FreeBSD on media, are impatient, or just want to
>> use it for evangelism purposes, then by all means download the ISO
>> images.  We can't promise that all the mirror sites will carry the
>> larger ISO images, but they will at least be available from the
>> following sites.
>> 
>> FTP
>> ---
>> 
>> At the time of this announcement the following FTP sites have FreeBSD
>> 2.2.9-RELEASE available.
>> 
>>  ftp://ftp.FreeBSD.org/pub/FreeBSD/releases
>> 
>> FreeBSD is also available via anonymous FTP from mirror sites in the
>> following countries: Argentina, Australia, Brazil, Bulgaria, Canada,
>> China, Czech Republic, Denmark, Estonia, Finland, France, Germany,
>> Hong Kong, Hungary, Iceland, Ireland, Japan, Korea, Lithuania,
>> the Netherlands, New Zealand, Poland, Portugal, Romania,
>> Russia, Saudi Arabia, South Africa, Slovak Republic, Slovenia, Spain,
>> Sweden, Taiwan, Thailand, Ukraine, and the United Kingdom.
>> 
>> Before trying the central FTP site, please check your regional
>> mirror(s) first by going to:
>> 
>> ftp://ftp..FreeBSD.org/pub/FreeBSD
>> 
>> Any additional mirror sites will be labeled ftp2, ftp3 and so on.
>> 
>> More information about FreeBSD mirror sites can be found at:
>> 
>> http://www.FreeBSD.org/doc/en_US.ISO8859-1/books/handbook/mirrors-ftp.html
>> 
>> For instructions on installing FreeBSD, please see Chapter 2 of The
>> FreeBSD Handbook.  It provides a complete installation walk-through
>> for users new to FreeBSD, and can be found online at:
>> 
>> http://www.FreeBSD.org/doc/en_US.ISO8859-1/books/handbook/install.html
>> 
>> Acknowledgments
>> 
>> 
>> The release engineering team for 2.2.9-RELEASE includes:
>> 
>> Scott Long             Release Engineering
>> Ruslan Ermilov         i386, creative director
>> Sniffy The Wonder Cat  Warm lap, typing assistance
>> Max The Dancing Cat    Early morning wakeups, QA
>> Sammy The Tiny Cat     Face licks, QA
>> 
>> -BEGIN PGP SIGNATURE-
>> Version: GnuPG v1.4.2 (FreeBSD)
>> 
>> iD8DBQFELuG+HTr20QF8Xr8RAkoyAJ4nU4v9TK/Tjh8eEGbjNtGxmiVu0gCfcNtg
>> oNz6FNHVuv87MSKJeXJcMAU=
>> =Z5hh
>> -END PGP SIGNATURE-

Re: CURRENT slow and shaky network stability

2016-04-02 Thread O. Hartmann
On Sat, 2 Apr 2016 10:55:03 +0200, "O. Hartmann" wrote:

> On Sat, 02 Apr 2016 01:07:55 -0700, Cy Schubert wrote:
> 
> > In message <56f6c6b0.6010...@protected-networks.net>, Michael Butler 
> > writes:  
> > > -current is not great for interactive use at all. The strategy of
> > > pre-emptively dropping idle processes to swap is hurting .. big time.
> > 
> > FreeBSD doesn't "preemptively" or arbitrarily push pages out to disk. LRU 
> > doesn't do this.
> >   
> > > 
> > > Compare inactive memory to swap in this example ..
> > > 
> > > 110 processes: 1 running, 108 sleeping, 1 zombie
> > > CPU:  1.2% user,  0.0% nice,  4.3% system,  0.0% interrupt, 94.5% idle
> > > Mem: 474M Active, 1609M Inact, 764M Wired, 281M Buf, 119M Free
> > > Swap: 4096M Total, 917M Used, 3178M Free, 22% Inuse
> > 
> > To analyze this you need to capture vmstat output. You'll see the free pool 
> > dip below a threshold and pages go out to disk in response. If you have 
> > daemons with small working sets, pages that are not part of the working 
> > sets for daemons or applications will eventually be paged out. This is not 
> > a bad thing. In your example above, the 281 MB of UFS buffers are more 
> > active than the 917 MB paged out. If it's paged out and never used again, 
> > then it doesn't hurt. However the 281 MB of buffers saves you I/O. The 
> > inactive pages are part of your free pool that were active at one time but 
> > now are not. They may be reclaimed and if they are, you've just saved more 
> > I/O.
> > 
> > Top is a poor tool to analyze memory use. Vmstat is the better tool to help 
> > understand memory use. Inactive memory isn't a bad thing per se. Monitor 
> > page outs, scan rate and page reclaims.
> > 
> >   
> 
> I give up! Tried to check via ssh/vmstat what is going on. Last lines
> before the broken pipe:
> 
> [...]
> procs  memory   page    disks faults      cpu
> r b w  avm   fre   flt  re  pi  po    fr   sr ad0 ad1   in    sy    cs us sy id
> 22 0 22 5.8G  1.0G 46319   0   0   0 55721 1297   0   4  219 23907  5400 95  5  0
> 22 0 22 5.4G  1.3G 51733   0   0   0 72436 1162   0   0  108 40869  3459 93  7  0
> 15 0 22  12G  1.2G 54400   0  27   0 52188 1160   0  42  148 52192  4366 91  9  0
> 14 0 22  12G  1.0G 44954   0  37   0 37550 1179   0  39  141 86209  4368 88 12  0
> 26 0 22  12G  1.1G 60258   0  81   0 69459 1119   0  27  123 779569 704359 87 13  0
> 29 3 22  13G  774M 50576   0  68   0 32204 1304   0   2  102 507337 484861 93  7  0
> 27 0 22  13G  937M 47477   0  48   0 59458 1264   3   2  112 68131 44407 95  5  0
> 36 0 22  13G  829M 83164   0   2   0 82575 1225   1   0  126 99366 38060 89 11  0
> 35 0 22 6.2G  1.1G 98803   0  13   0 121375 1217   2   8  112 99371  4999 85 15  0
> 34 0 22  13G  723M 54436   0  20   0 36952 1276   0  17  153 29142  4431 95  5  0
> Fssh_packet_write_wait: Connection to 192.168.0.1 port 22: Broken pipe
> 
> 
> This makes this crap system completely unusable. The server in question
> (FreeBSD 11.0-CURRENT #20 r297503: Sat Apr  2 09:02:41 CEST 2016 amd64) was
> running a poudriere bulk job. I cannot even determine which terminal goes
> down first - another one, idle much longer than the one showing the
> "vmstat 5" output, is still alive!
> 
> I consider this a serious bug; nothing that has happened since this "fancy"
> update has been a benefit. :-(

By the way - it might be of interest and some hint.

One of my boxes acts as server and gateway, using NAT and IPFW. When it is
under high load, as it was today, passing the network flow from the ISP to
clients on the internal network is sometimes extremely slow. I do not consider
this the reason for the collapsing ssh sessions, since the incident also
happens under no load, but in the overall view of the problem it could be a
hint - I hope.




Re: CURRENT slow and shaky network stability

2016-04-02 Thread O. Hartmann
On Sat, 02 Apr 2016 01:07:55 -0700, Cy Schubert wrote:

> In message <56f6c6b0.6010...@protected-networks.net>, Michael Butler writes:
> > -current is not great for interactive use at all. The strategy of
> > pre-emptively dropping idle processes to swap is hurting .. big time.  
> 
> FreeBSD doesn't "preemptively" or arbitrarily push pages out to disk. LRU 
> doesn't do this.
> 
> > 
> > Compare inactive memory to swap in this example ..
> > 
> > 110 processes: 1 running, 108 sleeping, 1 zombie
> > CPU:  1.2% user,  0.0% nice,  4.3% system,  0.0% interrupt, 94.5% idle
> > Mem: 474M Active, 1609M Inact, 764M Wired, 281M Buf, 119M Free
> > Swap: 4096M Total, 917M Used, 3178M Free, 22% Inuse  
> 
> To analyze this you need to capture vmstat output. You'll see the free pool 
> dip below a threshold and pages go out to disk in response. If you have 
> daemons with small working sets, pages that are not part of the working 
> sets for daemons or applications will eventually be paged out. This is not 
> a bad thing. In your example above, the 281 MB of UFS buffers are more 
> active than the 917 MB paged out. If it's paged out and never used again, 
> then it doesn't hurt. However the 281 MB of buffers saves you I/O. The 
> inactive pages are part of your free pool that were active at one time but 
> now are not. They may be reclaimed and if they are, you've just saved more 
> I/O.
> 
> Top is a poor tool to analyze memory use. Vmstat is the better tool to help 
> understand memory use. Inactive memory isn't a bad thing per se. Monitor 
> page outs, scan rate and page reclaims.
> 
> 

I give up! Tried to check via ssh/vmstat what is going on. Last lines before
the broken pipe:

[...]
procs  memory   page    disks faults      cpu
r b w  avm   fre   flt  re  pi  po    fr   sr ad0 ad1   in    sy    cs us sy id
22 0 22 5.8G  1.0G 46319   0   0   0 55721 1297   0   4  219 23907  5400 95  5  0
22 0 22 5.4G  1.3G 51733   0   0   0 72436 1162   0   0  108 40869  3459 93  7  0
15 0 22  12G  1.2G 54400   0  27   0 52188 1160   0  42  148 52192  4366 91  9  0
14 0 22  12G  1.0G 44954   0  37   0 37550 1179   0  39  141 86209  4368 88 12  0
26 0 22  12G  1.1G 60258   0  81   0 69459 1119   0  27  123 779569 704359 87 13  0
29 3 22  13G  774M 50576   0  68   0 32204 1304   0   2  102 507337 484861 93  7  0
27 0 22  13G  937M 47477   0  48   0 59458 1264   3   2  112 68131 44407 95  5  0
36 0 22  13G  829M 83164   0   2   0 82575 1225   1   0  126 99366 38060 89 11  0
35 0 22 6.2G  1.1G 98803   0  13   0 121375 1217   2   8  112 99371  4999 85 15  0
34 0 22  13G  723M 54436   0  20   0 36952 1276   0  17  153 29142  4431 95  5  0
Fssh_packet_write_wait: Connection to 192.168.0.1 port 22: Broken pipe


This makes this crap system completely unusable. The server in question
(FreeBSD 11.0-CURRENT #20 r297503: Sat Apr  2 09:02:41 CEST 2016 amd64) was
running a poudriere bulk job. I cannot even determine which terminal goes down
first - another one, idle much longer than the one showing the "vmstat 5"
output, is still alive!

I consider this a serious bug; nothing that has happened since this "fancy"
update has been a benefit. :-(




Re: CURRENT slow and shaky network stability

2016-04-02 Thread Cy Schubert
In message <56f6c6b0.6010...@protected-networks.net>, Michael Butler writes:
> -current is not great for interactive use at all. The strategy of
> pre-emptively dropping idle processes to swap is hurting .. big time.

FreeBSD doesn't "preemptively" or arbitrarily push pages out to disk. LRU 
doesn't do this.

> 
> Compare inactive memory to swap in this example ..
> 
> 110 processes: 1 running, 108 sleeping, 1 zombie
> CPU:  1.2% user,  0.0% nice,  4.3% system,  0.0% interrupt, 94.5% idle
> Mem: 474M Active, 1609M Inact, 764M Wired, 281M Buf, 119M Free
> Swap: 4096M Total, 917M Used, 3178M Free, 22% Inuse

To analyze this you need to capture vmstat output. You'll see the free pool 
dip below a threshold and pages go out to disk in response. If you have 
daemons with small working sets, pages that are not part of the working 
sets for daemons or applications will eventually be paged out. This is not 
a bad thing. In your example above, the 281 MB of UFS buffers are more 
active than the 917 MB paged out. If it's paged out and never used again, 
then it doesn't hurt. However the 281 MB of buffers saves you I/O. The 
inactive pages are part of your free pool that were active at one time but 
now are not. They may be reclaimed and if they are, you've just saved more 
I/O.

Top is a poor tool to analyze memory use. Vmstat is the better tool to help 
understand memory use. Inactive memory isn't a bad thing per se. Monitor 
page outs, scan rate and page reclaims.


-- 
Cheers,
Cy Schubert  or 
FreeBSD UNIX:     Web:  http://www.FreeBSD.org

The need of the many outweighs the greed of the few.







Re: CURRENT slow and shaky network stability

2016-04-02 Thread Cy Schubert
In message <56f6c6b0.6010...@protected-networks.net>, Michael Butler writes:
> -current is not great for interactive use at all. The strategy of
> pre-emptively dropping idle processes to swap is hurting .. big time.
> 
> Compare inactive memory to swap in this example ..
> 
> 110 processes: 1 running, 108 sleeping, 1 zombie
> CPU:  1.2% user,  0.0% nice,  4.3% system,  0.0% interrupt, 94.5% idle
> Mem: 474M Active, 1609M Inact, 764M Wired, 281M Buf, 119M Free
> Swap: 4096M Total, 917M Used, 3178M Free, 22% Inuse
> 
>   PID USERNAME   THR PRI NICE   SIZERES STATE   C   TIMEWCPU
> COMMAND
>  1819 imb  1  280   213M 11284K select  1 147:44   5.97%
> gkrellm
> 59238 imb 43  200   980M   424M select  0  10:07   1.92%
> firefox
> 
>  .. it shouldn't start randomly swapping out processes because they're
> used infrequently when there's more than enough RAM to spare ..

Inactive memory will after time be used to "top up" the free memory pool.

> 
> It also shows up when trying to reboot .. on all of my gear, 90 seconds
> of "fail-safe" time-out is no longer enough when a good proportion of
> daemons have been dropped onto swap and must be brought back in to flush
> their data segments :-(

What does vmstat 5 display? A high scan rate is indicative of memory in short
supply.

My laptop has 6 GB RAM of which I've allocated 2.5 GB for ARC. Top shows that 
3.6 GB 
are wired (not pagable) leaving 2.4 GB available for apps. The laptop is in the 
middle of a four thread buildworld. It's using 1.8 MB swap so far. ARC is 2560 
MB. 
UFS cache is only 49 MB at the moment but will balloon to 603 MB during 
installworld. 
When it does my swap grows to 100-150 MB. (The reason is the large 2.5 GB ARC 
*and* a 
large 603 MB UFS cache.)

Notice the vmstat output below. On line 8 of my vmstat output you see the
scan rate jump to 10K. Two pages per second are paged out. This is due to the
free memory pool (in line 7) dropping to 49 MB, so it freed up a few pages by
paging out. Notice that page reclaims (re) is high at times. These pages were
scheduled to be paged out but were used before they were. This indicates that
my laptop is running pretty close to the line between paging a lot and not
paging at all.

slippy$ vmstat 5
 procs  memory  page    disks faults      cpu
 r b w    avm    fre   flt  re  pi  po    fr   sr ad0 da0   in   sy   cs us sy id
 4 0 0   3413M   208M 12039  11   9   0 12804 170   0   0  533 7865 2722 59  3 38
 4 0 0   3260M   376M 18550   0   0   0 27436 386   9   8  576 23029 30705 94  6  0
 4 1 0   3432M   171M 25345   0   6   0 15340 360  10  12  530 2524 1362 97  3  0
 4 0 0   3395M   208M 20904   0   0   0 22995 395  12  11  517 5427 1142 97  3  0
 4 0 0   3695M    53M 20102   0   0   0 12482 473  17  10  517 1383 1244 98  2  0
 4 0 0   3404M   371M 22996  14  10   0 39691 4557  14   8  503 8540 1813 96  3  1
 4 1 0   3673M    49M 22398 441  22   0  6778 429  10  13  543 3034 1609 97  3  0
 4 0 0   3396M   439M 19522  26   3   2 33901 10137  11  15  545 5617 1686 97  3  0
 4 0 0   3489M   412M 26636   0   0   0 25710 393  10  12  531 5287 1450 95  3  2
 4 0 0   3558M   337M 23364 329  13   0 20051 410  11  15  561 6052 1702 96  3  0
 4 0 0   3492M   335M 18244   0   3   0 18550 444  14   7  512 5140 2087 98  2  0
 4 0 0   3412M   404M 21765   0   0   0 25611 388   7  12  533 7873 1394 97  3  0
 5 0 0   3604M   189M 19044   0   0   0  8404 505   7  10  644 63940 90591 93  6  1
 4 0 0   3533M   363M 13079 423  17   0 22327 464  11   8  501 7960 4194 94  3  3
 4 0 0   3222M   616M 20822 218  17   0 34180 294  11  13  550 5602 1850 95  4  1
 4 0 0   3307M   542M 19639  32   3   0 15940 345  13  10  516 2589 1505 96  3  1
 4 0 0   3320M   527M 19656   0   1   0 19191 397  14   8  514 1886 1257 97  3  0
 4 0 0   3295M   605M 21676 910  35   0 25978 356  14  12  533 3039 1490 95  4  0

Page outs is the first place to look. If there are no page outs, page
reclaims will tell you your system may be borderline. A high scan rate says
that your working set size is large enough to put some pressure on VM.
Ideally in this case I should add memory, but since I'm running this close to
the line (I chose my ARC maximum well) I'll just save my money instead. Also,
FreeBSD is doing exactly what it should in this scenario.
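That rule of thumb (page outs first, then scan rate and reclaims) can be
sketched as a small filter over vmstat output. The field positions follow the
header in the sample above (po is column 9, sr is column 11) and may differ
between releases; the sr threshold is illustrative, not canonical. For live
use, replace the heredoc with `vmstat 5 |`:

```shell
#!/bin/sh
# Flag vmstat(8) samples showing paging pressure: any page-outs (po),
# or a scan rate (sr) spike. NR > 2 skips the two header lines.
awk 'NR > 2 && ($9 > 0 || $11 >= 10000) {
        print "pressure: po=" $9 " sr=" $11
     }' <<'EOF'
 procs  memory  page    disks faults      cpu
 r b w    avm    fre   flt  re  pi  po    fr   sr ad0 da0   in   sy   cs us sy id
 4 0 0   3413M   208M 12039  11   9   0 12804 170   0   0  533 7865 2722 59  3 38
 4 0 0   3396M   439M 19522  26   3   2 33901 10137  11  15  545 5617 1686 97  3  0
EOF
```

Here only the second sample is flagged (po=2, sr=10137); the first has no
page-outs and a modest scan rate.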

Top is a good tool but it doesn't tell the whole picture. Run vmstat to get a
better picture of your memory use.

Following this thread throughout the day (on my cellphone), I'm not convinced
this is a FreeBSD O/S problem. Check your apps. What are their working set
sizes? Do your apps have a small or large locality of reference? Remember,
O/S tuning is a matter of robbing Peter to pay Paul. Reducing the resources
used by applications will pay back bigger dividends.

Hope this helps.


-- 
Cheers,
Cy Schubert  or 
FreeBSD UNIX:     Web:  

Re: CURRENT slow and shaky network stability

2016-04-02 Thread Cy Schubert
In message <201603300728.u2u7sdwc092...@gw.catspoiler.org>, Don Lewis writes:
> On 29 Mar, To: ohart...@zedat.fu-berlin.de wrote:
> > On 28 Mar, Don Lewis wrote:
> >> On 28 Mar, O. Hartmann wrote:
> >  
> >> If I get a chance, I try booting my FreeBSD 11 machine with less RAM to
> >> see if that is a trigger.
> > 
> > I just tried cranking hw.physmem down to 8 GB on 11.0-CURRENT r297204,
> > GENERIC kernel.  /boot/loader.conf contains:
> >   geom_mirror_load="YES"
> >   kern.geom.label.disk_ident.enable="0"
> >   kern.geom.label.gptid.enable="0"
> >   zfs_load="YES"
> >   vboxdrv_load="YES"
> >   hw.physmem="8G"
> > 
> > /etc/sysctl.conf contains:
> >   kern.ipc.shm_allow_removed=1
> > 
> > No /etc/src.conf and nothing of that should matter in /etc/make.conf.
> > 
> > 
> > This is what I see after running
> > poudriere ports -p whatever -u
> > 
> > last pid:  2102;  load averages:  0.24,  0.52,  0.36    up 0+00:06:54  14:13:51
> > 52 processes:  1 running, 51 sleeping
> > CPU:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
> > Mem: 95M Active, 20M Inact, 1145M Wired, 39K Buf, 5580M Free
> > ARC: 595M Total, 256M MFU, 248M MRU, 16K Anon, 14M Header, 78M Other
> > Swap: 40G Total, 40G Free
> > 
> > No swap used, inactive memory low, no interactivity problems.  Next I'll
> > try r297267, which is what I believe you are running.  I scanned the
> > commit logs between r297204 and r297267 and didn't see anything terribly
> > suspicious looking.
> 
> No problems here with r297267 either.  I did a bunch of small poudriere
> runs since the system was first booted.  Usable RAM is still dialed back
> to 8 GB.  A bit of swap is in use, mostly because nginx, which has been
> unused since the system was booted, got swapped out.  Inactive memory is
> low now that poudriere is done.
> 
> last pid: 75471;  load averages:  0.21,  0.15,  0.19    up 0+07:36:07  00:24:00
> 50 processes:  1 running, 49 sleeping
> CPU:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
> Mem: 5988K Active, 14M Inact, 2641M Wired, 41K Buf, 4179M Free
> ARC: 790M Total, 575M MFU, 169M MRU, 16K Anon, 9618K Header, 36M Other
> Swap: 40G Total, 50M Used, 40G Free
> 
> Do you use tmpfs?  Anything stored in there will get stashed in inactive
> memory and/or swap.

Tmpfs objects are treated as any other in memory. If the pages are recent 
enough they will be active.
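Since tmpfs pages are pageable like any others, a bulk build writing into an
unbounded tmpfs /tmp can push other working sets out to swap. An illustrative
way to bound it (mount point and size are arbitrary choices, the size option
is per FreeBSD's tmpfs(5)):

```
# Hypothetical /etc/fstab entry: cap tmpfs-backed /tmp at 1 GB so
# builds cannot grow it without bound.
tmpfs  /tmp  tmpfs  rw,mode=01777,size=1g  0  0
```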


-- 
Cheers,
Cy Schubert  or 
FreeBSD UNIX:     Web:  http://www.FreeBSD.org

The need of the many outweighs the greed of the few.

