Re: slow machine, swap in use, but more than 5GB of RAM inactive

2017-03-07 Thread Erich Dollansky
Hi,

On Tue, 7 Mar 2017 16:46:21 -0800
Kevin Oberman  wrote:

> On Tue, Mar 7, 2017 at 3:14 PM, Erich Dollansky
>  > wrote:  
> 
> > On Tue, 7 Mar 2017 23:30:58 +1100 (EST)
> > Ian Smith  wrote:
> >  
> > > On Tue, 7 Mar 2017 10:19:35 +0800, Erich Dollansky wrote:  
> > >  >
> > >  > I wonder about the slow speed of my machine while top shows
> > >  > ample inactive memory:  
> > >
> > > ( quoting from this top output because it's neater :)
> > >  
> > >  > last pid: 85287;  load averages:  2.56,  2.44, 1.68
> > >  > up 6+10:24:45 10:13:36 191 processes: 5 running, 186 sleeping
> > >  > CPU 0: 47.1% user,  0.0% nice, 51.4% system,  0.0% interrupt,  1.6% idle
> > >  > CPU 1: 38.4% user,  0.0% nice, 60.4% system,  0.0% interrupt,  1.2% idle
> > >  > CPU 2: 38.8% user,  0.0% nice, 59.2% system,  0.0% interrupt,  2.0% idle
> > >  > CPU 3: 45.5% user,  0.0% nice, 51.0% system,  0.4% interrupt,  3.1% idle
> > >  > Mem: 677M Active, 5600M Inact, 1083M Wired, 178M Cache, 816M Buf, 301M Free
> > >  > Swap: 16G Total, 1352M Used, 15G Free, 8% Inuse
> > >
> > > Others have covered the swap / inactive memory issue.
> > >
> > > But I'd expect this to be slow, for any new work anyway .. there's
> > > next to no idle on any CPU.  I'd be asking, what's all of that
> > > system usage?
> > >  
> > This is building ports in the background. Still, since I am used to
> > doing this once a month, I know how it feels when the ports are
> > updated. This one was really slow. Hopefully, it was just an unlucky
> > coincidence.
> >
> > Meanwhile, I have rebooted the machine. It is faster now; I would say
> > it is back to normal. It has not reached its limits since the restart.
> > It is now on:
> >
> > FreeBSD 10.3-STABLE #3 r314363
> >
> > Erich
> >  
> 
> Well, it looks like over half of the CPU is running in system space,
> and that seems rather high for what I would assume is compilation. I
> think you will need to poke around with tools like systat to see just
> what the system is doing for 55% or so of all CPUs. Since there does
> not seem to be a lot of IO or memory at issue, the various commands for
> those are probably not very interesting. Probably not lock stats,
> either.
> 
> This reminds me of when some operation (IIRC NFS related) was calling
> system time routines that are fairly expensive on FreeBSD almost
> continually.
> --

There were one or two NFS clients connected, but they should have been
idle. Both are on an older FreeBSD 12.

Could it be caused by a Seagate 2TB 2.5" hard drive with an 8GB SSD
cache (ST2000LX001-1RG174)?

The system behaved strangely when the disk was new. It seemed that the
flash part was initially also used for writing. After prolonged write
operations it then sometimes took up to a minute before data could be
read again. The disk now has a transfer volume of some 5TB and has not
shown this behaviour for a few weeks. It is just hard for me to see why
this happens. I have used this machine for years; the only change was
the disk. I know, things can happen.

Since I have rebooted the machine, I will keep this in mind and also
check in this direction if it happens again.

Thanks!

Erich
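The user/system split that the quoted reply suggests investigating can also be probed per workload from userland. A minimal sketch in Python using os.times(); the lambda workload is just a placeholder, not anything from the thread:

```python
import os

def cpu_split(fn):
    """Run fn() and return (user, system) CPU seconds it consumed,
    as reported for this process by os.times()."""
    t0 = os.times()
    fn()
    t1 = os.times()
    return t1.user - t0.user, t1.system - t0.system

# A CPU-bound placeholder workload: mostly user time, little system time.
user, system = cpu_split(lambda: sum(i * i for i in range(10 ** 6)))
```

A workload that, like the one in this thread, shows system time rivaling user time would point at syscall or in-kernel overhead rather than the compiler itself.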
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


about that DFBSD performance test

2017-03-07 Thread Eugene M. Zheganin

Hi.

Some have probably seen this already - 
http://lists.dragonflybsd.org/pipermail/users/2017-March/313254.html


So, could anyone explain why FreeBSD was owned that badly? The test is
split into two parts: one is the nginx part, and the other is the IPv4
forwarding part. I understand that the nginx ownage was due to the
SO_REUSEPORT feature, which we do formally have, but in DragonFly BSD
and Linux it also provides a kernel socket multiplexer, which eliminates
locking, and ours does not. I have only found traces of a discussion
saying the DragonFly BSD implementation is too hackish. Well, hackish or
not, it turns out to be 4 times faster. The IPv4 forwarding loss is a
pure defeat, though.


Please note that although they use HEAD in these tests, they also
mention that this is the GENERIC-NODEBUG kernel, which means this isn't
related to the WITNESS stuff.


Please also don't consider this trolling; I have been a big FreeBSD fan
through the years, so I'm asking because I'm kind of concerned.


Eugene.
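For readers unfamiliar with the feature under discussion: SO_REUSEPORT lets several sockets bind the same address/port, and on Linux and DragonFly BSD the kernel then distributes incoming connections across them. A minimal userland sketch of the binding behavior (this only shows the API surface; the in-kernel distribution and locking differences the benchmark measures are OS-specific and not visible here):

```python
import socket

def bind_reuseport_pair(host="127.0.0.1"):
    """Bind two TCP listeners to the same port via SO_REUSEPORT.
    The kernel may then spread incoming connections between them;
    the distribution policy differs between operating systems."""
    a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    a.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    a.bind((host, 0))            # let the kernel pick a free port
    port = a.getsockname()[1]
    b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    b.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    b.bind((host, port))         # would fail with EADDRINUSE without SO_REUSEPORT
    a.listen()
    b.listen()
    return a, b
```

In a multi-process server like nginx, each worker owns one such socket, which is what removes the shared accept-queue locking on kernels that implement per-socket distribution.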



Re: slow machine, swap in use, but more than 5GB of RAM inactive

2017-03-07 Thread Kevin Oberman
On Tue, Mar 7, 2017 at 3:14 PM, Erich Dollansky  wrote:

> Hi,
>
> On Tue, 7 Mar 2017 23:30:58 +1100 (EST)
> Ian Smith  wrote:
>
> > On Tue, 7 Mar 2017 10:19:35 +0800, Erich Dollansky wrote:
> >  > Hi,
> >  >
> >  > I wonder about the slow speed of my machine while top shows ample
> >  > inactive memory:
> >
> > ( quoting from this top output because it's neater :)
> >
> >  > last pid: 85287;  load averages:  2.56,  2.44, 1.68
> >  > up 6+10:24:45 10:13:36 191 processes: 5 running, 186 sleeping
> >  > CPU 0: 47.1% user,  0.0% nice, 51.4% system,  0.0% interrupt,  1.6% idle
> >  > CPU 1: 38.4% user,  0.0% nice, 60.4% system,  0.0% interrupt,  1.2% idle
> >  > CPU 2: 38.8% user,  0.0% nice, 59.2% system,  0.0% interrupt,  2.0% idle
> >  > CPU 3: 45.5% user,  0.0% nice, 51.0% system,  0.4% interrupt,  3.1% idle
> >  > Mem: 677M Active, 5600M Inact, 1083M Wired, 178M Cache, 816M Buf, 301M Free
> >  > Swap: 16G Total, 1352M Used, 15G Free, 8% Inuse
> >
> > Others have covered the swap / inactive memory issue.
> >
> > But I'd expect this to be slow, for any new work anyway .. there's
> > next to no idle on any CPU.  I'd be asking, what's all of that system
> > usage?
> >
> This is building ports in the background. Still, since I am used to
> doing this once a month, I know how it feels when the ports are
> updated. This one was really slow. Hopefully, it was just an unlucky
> coincidence.
>
> Meanwhile, I have rebooted the machine. It is faster now; I would say
> it is back to normal. It has not reached its limits since the restart.
> It is now on:
>
> FreeBSD 10.3-STABLE #3 r314363
>
> Erich
>

Well, it looks like over half of the CPU is running in system space, and
that seems rather high for what I would assume is compilation. I think
you will need to poke around with tools like systat to see just what the
system is doing for 55% or so of all CPUs. Since there does not seem to
be a lot of IO or memory at issue, the various commands for those are
probably not very interesting. Probably not lock stats, either.

This reminds me of when some operation (IIRC NFS related) was calling
system time routines that are fairly expensive on FreeBSD almost
continually.
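The cost of timekeeping calls alluded to here can be made visible from userland by counting how many clock reads complete in a fixed interval; a rough probe, with the caveat that relative costs vary greatly by OS, timecounter hardware, and whether the read needs a real syscall:

```python
import time

def clock_reads_per_second(clock_id, duration=0.2):
    """Count how many clock_gettime(clock_id) calls complete in
    `duration` seconds of wall time -- a crude measure of how
    expensive a given time source is on the running system."""
    deadline = time.monotonic() + duration
    n = 0
    while time.monotonic() < deadline:
        time.clock_gettime(clock_id)
        n += 1
    return n / duration

rate = clock_reads_per_second(time.CLOCK_MONOTONIC)
```

A subsystem calling such routines "almost continually" multiplies whatever per-call cost this probe reveals across every operation it performs.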
--
Kevin Oberman, Part time kid herder and retired Network Engineer
E-mail: rkober...@gmail.com
PGP Fingerprint: D03FB98AFA78E3B78C1694B318AB39EF1B055683


Re: unionfs bugs, a partial patch and some comments [Was: Re: 1-BETA3 Panic: __lockmgr_args: downgrade a recursed lockmgr nfs @ /usr/local/share/deploy-tools/RELENG_11/src/sys/fs/unionfs/union_vnops.c

2017-03-07 Thread Konstantin Belousov
On Tue, Mar 07, 2017 at 10:49:01PM +, Rick Macklem wrote:
> Hmm, this is going to sound dumb, but I don't recall generating any
> unionfs patch;-)
> I'll go look for it. Maybe it was Kostik's?
I have not touched unionfs, and have no plans to.  It is equally broken in
all relevant versions of FreeBSD.

> 
> rick
> 
> From: Harry Schmalzbauer 
> Sent: Tuesday, March 7, 2017 2:45:40 PM
> To: Rick Macklem
> Cc: Konstantin Belousov; FreeBSD Stable; Mark Johnston; k...@freebsd.org
> Subject: Re: unionfs bugs, a partial patch and some comments [Was: Re: 
> 1-BETA3 Panic: __lockmgr_args: downgrade a recursed lockmgr nfs @ 
> /usr/local/share/deploy-tools/RELENG_11/src/sys/fs/unionfs/union_vnops.c:1905]
> 
>  Regarding Harry Schmalzbauer's message of 07.03.2017 19:44 (localtime):
> >  Regarding Harry Schmalzbauer's message of 07.03.2017 13:42 (localtime):
> > …
> >> Something UFS-related seems to have tightened the unionfs locking
> >> problem in stable/11.  Now the machine instantaneously panics during
> >> boot, after mounting root, with Rick's latest patch.
> >>
> >> Unfortunately I don't have SWAP available on that machine (yet), but
> >> maybe this is a hint for anybody.
> >>
> >> db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame
> >> 0xfe00982220e0
> >> vpanic() at vpanic+0x186/frame 0xfe0098222160
> >> kassert_panic() at kassert_panic+0x126/frame 0xfe00982221d0
> >> witness_assert() at witness_assert+0x35a/frame 0xfe009830
> >> __lockmgr_args() at __lockmgr_args+0x517/frame 0xfe0098d0
> >> vop_stdunlock() at vop_stdunlock+0x3b/frame 0xfe0098f0
> >> VOP_UNLOCK_APV() at VOP_UNLOCK_APV+0xe0/frame 0xfe0098222320
> >> unionfs_unlock() at unionfs_unlock+0x112/frame 0xfe0098222390
> >> VOP_UNLOCK_APV() at VOP_UNLOCK_APV+0xe0/frame 0xfe00982223c0
> >> unionfs_nodeget() at unionfs_nodeget+0x3ef/frame 0xfe0098222470
> >> unionfs_domount() at unionfs_domount+0x518/frame 0xfe00982226b0
> >> vfs_donmount() at vfs_donmount+0xe37/frame 0xfe00982228f0
> >> sys_nmount() at sys_nmount+0x72/frame 0xfe0098222930
> >> amd64_syscall() at amd64_syscall+0x2f9/frame 0xfe0098222ab0
> >> Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfe0098222ab0
> >> --- syscall (378, FreeBSD ELF64, sys_nmount), rip = 0x80086ecea, rsp =
> >> 0x7fffe318, rbp = 0x7fffeca0 ---
> > New discovery:
> > Rick's latest patch causes a panic only with KDB. If I compile a kernel
> > without WITNESS and KDB, the machine boots fine!
> > Also, it is at least no longer so easy to trigger the deadlock :-). I
> > need to do more testing, but so far Rick's approach seems very
> > promising :-).
> 
> My unionfs deadlock problem isn't really solved by Rick's latest
> patch; I can still reproduce it: krb5.conf and krb5.keytab are files on
> a unionfs referenced by /etc.  libexec/negotiate_kerberos_auth reads
> these, and if I have enough helper processes handling requests, the
> deadlock occurs.
> 
> _But_: If I move the files outside the unionfs and create a symlink, I
> cannot reproduce the deadlock anymore, whereas it was similarly easy to
> reproduce without this or any of the other workarounds.
> So it looks like I have an acceptable solution for now, although it's
> only usable under certain conditions.
> 
> Unfortunately I can't do tests with a debug kernel, since the patch
> prevents the system with the debug kernel from starting up.
> But if this were ironed out, I'd happily provide more info.
> 
> 
> Thanks,
> 
> -Harry
> 


Re: slow machine, swap in use, but more than 5GB of RAM inactive

2017-03-07 Thread Erich Dollansky
Hi,

On Tue, 7 Mar 2017 23:30:58 +1100 (EST)
Ian Smith  wrote:

> On Tue, 7 Mar 2017 10:19:35 +0800, Erich Dollansky wrote:
>  > Hi,
>  > 
>  > I wonder about the slow speed of my machine while top shows ample
>  > inactive memory:  
> 
> ( quoting from this top output because it's neater :)
> 
>  > last pid: 85287;  load averages:  2.56,  2.44, 1.68
>  > up 6+10:24:45 10:13:36 191 processes: 5 running, 186 sleeping
>  > CPU 0: 47.1% user,  0.0% nice, 51.4% system,  0.0% interrupt,  1.6% idle
>  > CPU 1: 38.4% user,  0.0% nice, 60.4% system,  0.0% interrupt,  1.2% idle
>  > CPU 2: 38.8% user,  0.0% nice, 59.2% system,  0.0% interrupt,  2.0% idle
>  > CPU 3: 45.5% user,  0.0% nice, 51.0% system,  0.4% interrupt,  3.1% idle
>  > Mem: 677M Active, 5600M Inact, 1083M Wired, 178M Cache, 816M Buf, 301M Free
>  > Swap: 16G Total, 1352M Used, 15G Free, 8% Inuse
> 
> Others have covered the swap / inactive memory issue.
> 
> But I'd expect this to be slow, for any new work anyway .. there's
> next to no idle on any CPU.  I'd be asking, what's all of that system
> usage?
> 
This is building ports in the background. Still, since I am used to
doing this once a month, I know how it feels when the ports are updated.
This one was really slow. Hopefully, it was just an unlucky coincidence.

Meanwhile, I have rebooted the machine. It is faster now; I would say it
is back to normal. It has not reached its limits since the restart.
It is now on:

FreeBSD 10.3-STABLE #3 r314363

Erich


Re: unionfs bugs, a partial patch and some comments [Was: Re: 1-BETA3 Panic: __lockmgr_args: downgrade a recursed lockmgr nfs @ /usr/local/share/deploy-tools/RELENG_11/src/sys/fs/unionfs/union_vnops.c

2017-03-07 Thread Rick Macklem
Hmm, this is going to sound dumb, but I don't recall generating any
unionfs patch;-)
I'll go look for it. Maybe it was Kostik's?

rick

From: Harry Schmalzbauer 
Sent: Tuesday, March 7, 2017 2:45:40 PM
To: Rick Macklem
Cc: Konstantin Belousov; FreeBSD Stable; Mark Johnston; k...@freebsd.org
Subject: Re: unionfs bugs, a partial patch and some comments [Was: Re: 1-BETA3 
Panic: __lockmgr_args: downgrade a recursed lockmgr nfs @ 
/usr/local/share/deploy-tools/RELENG_11/src/sys/fs/unionfs/union_vnops.c:1905]

 Regarding Harry Schmalzbauer's message of 07.03.2017 19:44 (localtime):
>  Regarding Harry Schmalzbauer's message of 07.03.2017 13:42 (localtime):
> …
>> Something UFS-related seems to have tightened the unionfs locking
>> problem in stable/11.  Now the machine instantaneously panics during
>> boot, after mounting root, with Rick's latest patch.
>>
>> Unfortunately I don't have SWAP available on that machine (yet), but
>> maybe this is a hint for anybody.
>>
>> db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame
>> 0xfe00982220e0
>> vpanic() at vpanic+0x186/frame 0xfe0098222160
>> kassert_panic() at kassert_panic+0x126/frame 0xfe00982221d0
>> witness_assert() at witness_assert+0x35a/frame 0xfe009830
>> __lockmgr_args() at __lockmgr_args+0x517/frame 0xfe0098d0
>> vop_stdunlock() at vop_stdunlock+0x3b/frame 0xfe0098f0
>> VOP_UNLOCK_APV() at VOP_UNLOCK_APV+0xe0/frame 0xfe0098222320
>> unionfs_unlock() at unionfs_unlock+0x112/frame 0xfe0098222390
>> VOP_UNLOCK_APV() at VOP_UNLOCK_APV+0xe0/frame 0xfe00982223c0
>> unionfs_nodeget() at unionfs_nodeget+0x3ef/frame 0xfe0098222470
>> unionfs_domount() at unionfs_domount+0x518/frame 0xfe00982226b0
>> vfs_donmount() at vfs_donmount+0xe37/frame 0xfe00982228f0
>> sys_nmount() at sys_nmount+0x72/frame 0xfe0098222930
>> amd64_syscall() at amd64_syscall+0x2f9/frame 0xfe0098222ab0
>> Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfe0098222ab0
>> --- syscall (378, FreeBSD ELF64, sys_nmount), rip = 0x80086ecea, rsp =
>> 0x7fffe318, rbp = 0x7fffeca0 ---
> New discovery:
> Rick's latest patch causes a panic only with KDB. If I compile a kernel
> without WITNESS and KDB, the machine boots fine!
> Also, it is at least no longer so easy to trigger the deadlock :-). I
> need to do more testing, but so far Rick's approach seems very
> promising :-).

My unionfs deadlock problem isn't really solved by Rick's latest patch;
I can still reproduce it: krb5.conf and krb5.keytab are files on a
unionfs referenced by /etc.  libexec/negotiate_kerberos_auth reads
these, and if I have enough helper processes handling requests, the
deadlock occurs.

_But_: If I move the files outside the unionfs and create a symlink, I
cannot reproduce the deadlock anymore, whereas it was similarly easy to
reproduce without this or any of the other workarounds.
So it looks like I have an acceptable solution for now, although it's
only usable under certain conditions.

Unfortunately I can't do tests with a debug kernel, since the patch
prevents the system with the debug kernel from starting up.
But if this were ironed out, I'd happily provide more info.


Thanks,

-Harry



Re: unionfs bugs, a partial patch and some comments [Was: Re: 1-BETA3 Panic: __lockmgr_args: downgrade a recursed lockmgr nfs @ /usr/local/share/deploy-tools/RELENG_11/src/sys/fs/unionfs/union_vnops.c

2017-03-07 Thread Harry Schmalzbauer
 Regarding Harry Schmalzbauer's message of 07.03.2017 19:44 (localtime):
>  Regarding Harry Schmalzbauer's message of 07.03.2017 13:42 (localtime):
> …
>> Something UFS-related seems to have tightened the unionfs locking
>> problem in stable/11.  Now the machine instantaneously panics during
>> boot, after mounting root, with Rick's latest patch.
>>
>> Unfortunately I don't have SWAP available on that machine (yet), but
>> maybe this is a hint for anybody.
>>
>> db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame
>> 0xfe00982220e0
>> vpanic() at vpanic+0x186/frame 0xfe0098222160
>> kassert_panic() at kassert_panic+0x126/frame 0xfe00982221d0
>> witness_assert() at witness_assert+0x35a/frame 0xfe009830
>> __lockmgr_args() at __lockmgr_args+0x517/frame 0xfe0098d0
>> vop_stdunlock() at vop_stdunlock+0x3b/frame 0xfe0098f0
>> VOP_UNLOCK_APV() at VOP_UNLOCK_APV+0xe0/frame 0xfe0098222320
>> unionfs_unlock() at unionfs_unlock+0x112/frame 0xfe0098222390
>> VOP_UNLOCK_APV() at VOP_UNLOCK_APV+0xe0/frame 0xfe00982223c0
>> unionfs_nodeget() at unionfs_nodeget+0x3ef/frame 0xfe0098222470
>> unionfs_domount() at unionfs_domount+0x518/frame 0xfe00982226b0
>> vfs_donmount() at vfs_donmount+0xe37/frame 0xfe00982228f0
>> sys_nmount() at sys_nmount+0x72/frame 0xfe0098222930
>> amd64_syscall() at amd64_syscall+0x2f9/frame 0xfe0098222ab0
>> Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfe0098222ab0
>> --- syscall (378, FreeBSD ELF64, sys_nmount), rip = 0x80086ecea, rsp =
>> 0x7fffe318, rbp = 0x7fffeca0 ---
> New discovery:
> Rick's latest patch causes a panic only with KDB. If I compile a kernel
> without WITNESS and KDB, the machine boots fine!
> Also, it is at least no longer so easy to trigger the deadlock :-). I
> need to do more testing, but so far Rick's approach seems very
> promising :-).

My unionfs deadlock problem isn't really solved by Rick's latest patch;
I can still reproduce it: krb5.conf and krb5.keytab are files on a
unionfs referenced by /etc.  libexec/negotiate_kerberos_auth reads
these, and if I have enough helper processes handling requests, the
deadlock occurs.

_But_: If I move the files outside the unionfs and create a symlink, I
cannot reproduce the deadlock anymore, whereas it was similarly easy to
reproduce without this or any of the other workarounds.
So it looks like I have an acceptable solution for now, although it's
only usable under certain conditions.

Unfortunately I can't do tests with a debug kernel, since the patch
prevents the system with the debug kernel from starting up.
But if this were ironed out, I'd happily provide more info.


Thanks,

-Harry



Re: unionfs bugs, a partial patch and some comments [Was: Re: 1-BETA3 Panic: __lockmgr_args: downgrade a recursed lockmgr nfs @ /usr/local/share/deploy-tools/RELENG_11/src/sys/fs/unionfs/union_vnops.c

2017-03-07 Thread Harry Schmalzbauer
 Regarding Harry Schmalzbauer's message of 07.03.2017 13:42 (localtime):
…
> Something UFS-related seems to have tightened the unionfs locking
> problem in stable/11.  Now the machine instantaneously panics during
> boot, after mounting root, with Rick's latest patch.
>
> Unfortunately I don't have SWAP available on that machine (yet), but
> maybe this is a hint for anybody.
>
> db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame
> 0xfe00982220e0
> vpanic() at vpanic+0x186/frame 0xfe0098222160
> kassert_panic() at kassert_panic+0x126/frame 0xfe00982221d0
> witness_assert() at witness_assert+0x35a/frame 0xfe009830
> __lockmgr_args() at __lockmgr_args+0x517/frame 0xfe0098d0
> vop_stdunlock() at vop_stdunlock+0x3b/frame 0xfe0098f0
> VOP_UNLOCK_APV() at VOP_UNLOCK_APV+0xe0/frame 0xfe0098222320
> unionfs_unlock() at unionfs_unlock+0x112/frame 0xfe0098222390
> VOP_UNLOCK_APV() at VOP_UNLOCK_APV+0xe0/frame 0xfe00982223c0
> unionfs_nodeget() at unionfs_nodeget+0x3ef/frame 0xfe0098222470
> unionfs_domount() at unionfs_domount+0x518/frame 0xfe00982226b0
> vfs_donmount() at vfs_donmount+0xe37/frame 0xfe00982228f0
> sys_nmount() at sys_nmount+0x72/frame 0xfe0098222930
> amd64_syscall() at amd64_syscall+0x2f9/frame 0xfe0098222ab0
> Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfe0098222ab0
> --- syscall (378, FreeBSD ELF64, sys_nmount), rip = 0x80086ecea, rsp =
> 0x7fffe318, rbp = 0x7fffeca0 ---

New discovery:
Rick's latest patch causes a panic only with KDB. If I compile a kernel
without WITNESS and KDB, the machine boots fine!
Also, it is at least no longer so easy to trigger the deadlock :-). I
need to do more testing, but so far Rick's approach seems very
promising :-). Unfortunately, I can't provide a fix or explain why the
KDB kernel panics and the non-KDB one doesn't; my vague guess is that
additional locking checks (KASSERT?), which prevent further damage, are
not in place. So I guess I'm in dangerous waters, but it definitely is a
highly appreciated improvement for me and my very best bet for now
(neither eliminating unionfs nor holding off on 11 updates were real
options for me, especially because unionfs isn't really working well on
10.3 either; it just doesn't lead to deadlocks in as many environments)!

I tried the non-debug kernel because I had browsed old unionfs
discussions and desperately gave Attilio Rao's patch a try, since I
couldn't remember why I hadn't kept it locally:
https://people.freebsd.org/~attilio/unionfs_nodeget4.patch (he tried to
solve unionfs problems for RELENG_9 back in 2012:
https://lists.freebsd.org/pipermail/freebsd-stable/2012-November/070358.html)

It is still true that his patch leads to a panic, but only with a
debugging kernel. The same patch without KDB allows the system to boot
and start squid. But the result is the same as with plain r314856: the
system deadlocks reproducibly.

Also, the trace with his patch looks identical to the plain r314856
unionfs panic.

So I hope Rick or someone else can pick up the latest patch and polish
it to make KDB kernels happy :-)
I can offer a small donation if that helps!
Of course, I'll also provide KDB info if needed/helpful.

thanks,

-harry


Wine Enthusiasts List

2017-03-07 Thread Antonia Riddle


Hi,

Hope this email finds you well!

Would you be interested in acquiring an email list of "Wine Enthusiasts List" 
from USA?

We also have data for:

1. Beer Enthusiasts List
2. Liquor Enthusiasts List
3. Beverage Enthusiasts List
4. Food Enthusiasts List
5. Entertainment Enthusiasts List and many more...

Each record in the list contains Contact (First and Last Name), Mailing 
Address, List type and Opt-in email address.

All the contacts are opt-in verified, 100 percent permission based and can be 
used for unlimited multi-channel marketing.

Appreciate your quick response and thoughts.

Best Wishes,
Antonia Riddle
Marketing Manager


---
This email has been checked for viruses by Avast antivirus software.
https://www.avast.com/antivirus



Re: unionfs bugs, a partial patch and some comments [Was: Re: 1-BETA3 Panic: __lockmgr_args: downgrade a recursed lockmgr nfs @ /usr/local/share/deploy-tools/RELENG_11/src/sys/fs/unionfs/union_vnops.c

2017-03-07 Thread Harry Schmalzbauer
Regarding Rick Macklem's message of 05.09.2016 23:21 (localtime):
> Harry Schmalzbauer  wrote:
>>Regarding Rick Macklem's message of 18.08.2016 02:03 (localtime):
>>>  Kostik wrote:
>>> [stuff snipped]
>  insmntque() performs the cleanup on its own, and that default cleanup
> is not suitable for the situation.  I think that insmntque1() would
> better fit your requirements; you need to move the common code into a
> helper.  It seems that unionfs_ins_cached_vnode() cleanup could reuse it.
>>> 
>>> I've attached an updated patch (untested like the last one). This one
>>> creates a custom version of insmntque_stddtr() that first calls
>>> unionfs_noderem() and then does the same stuff as insmntque_stddtr().
>>> This looks like it does the required stuff (unionfs_noderem() is what
>>> the unionfs VOP_RECLAIM() does).
>>> It switches the node back to using its own v_vnlock, which is
>>> exclusively locked, among other things.
>>
>>Thanks a lot; today I gave it a try.
>>
>>With this patch, one reproducible panic can still easily be triggered:
>>I have directory A unionfs-mounted under directory B.
>>Then I mount_unionfs the same directory A below another directory C.
>>panic: __lockmgr_args: downgrade a recursed lockmgr nfs @
>>/usr/local/share/deploy-tools/RELENG_11/src/sys/fs/unionfs/union_vnops.c:1905
>>The result is this backtrace, hardly helpful I guess:
>>
>>#1  0x80ae5fd9 in kern_reboot (howto=260) at
>>/usr/local/share/deploy-tools/RELENG_11/src/sys/kern/kern_shutdown.c:366
>>#2  0x80ae658b in vpanic (fmt=<optimized out>, ap=<optimized out>)
>>at
>>/usr/local/share/deploy-tools/RELENG_11/src/sys/kern/kern_shutdown.c:759
>>#3  0x80ae63c3 in panic (fmt=0x0) at
>>/usr/local/share/deploy-tools/RELENG_11/src/sys/kern/kern_shutdown.c:690
>>#4  0x80ab7ab7 in __lockmgr_args (lk=<optimized out>,
>>flags=<optimized out>, ilk=<optimized out>, wmesg=<optimized out>,
>>pri=<optimized out>, timo=<optimized out>, file=<optimized out>,
>>line=<optimized out>)
>>at /usr/local/share/deploy-tools/RELENG_11/src/sys/kern/kern_lock.c:992
>>#5  0x80ba510c in vop_stdlock (ap=<optimized out>) at
>>lockmgr.h:98
>>#6  0x8111932d in VOP_LOCK1_APV (vop=<optimized out>,
>>a=<optimized out>) at vnode_if.c:2087
>>#7  0x80a18cfc in unionfs_lock (ap=0xfe007a3ba6a0) at
>>vnode_if.h:859
>>#8  0x8111932d in VOP_LOCK1_APV (vop=<optimized out>,
>>a=<optimized out>) at vnode_if.c:2087
>>#9  0x80bc9b93 in _vn_lock (vp=<optimized out>,
>>flags=66560, file=<optimized out>, line=<optimized out>) at
>>vnode_if.h:859
>>#10 0x80a18460 in unionfs_readdir (ap=<optimized out>) at
>>/usr/local/share/deploy-tools/RELENG_11/src/sys/fs/unionfs/union_vnops.c:1531
>>#11 0x81118ecf in VOP_READDIR_APV (vop=<optimized out>,
>>a=<optimized out>) at vnode_if.c:1822
>>#12 0x80bc6e3b in kern_getdirentries (td=<optimized out>,
>>fd=<optimized out>, buf=0x800c3d000 <out of bounds>,
>>count=<optimized out>, basep=0xfe007a3ba980, residp=0x0)
>>at vnode_if.h:758
>>#13 0x80bc6bf8 in sys_getdirentries (td=0x0,
>>uap=0xfe007a3baa40) at
>>/usr/local/share/deploy-tools/RELENG_11/src/sys/kern/vfs_syscalls.c:3940
>>#14 0x80fad6b8 in amd64_syscall (td=<optimized out>,
>>traced=0) at subr_syscall.c:135
>>#15 0x80f8feab in Xfast_syscall () at
>>/usr/local/share/deploy-tools/RELENG_11/src/sys/amd64/amd64/exception.S:396
>>#16 0x00452eea in ?? ()
>>Previous frame inner to this frame (corrupt stack?)
> Ok, I finally got around to looking at this, and the panic() looks
> like a pretty straightforward bug in the unionfs code.
> - In unionfs_readdir(), it does a vn_lock(..LK_UPGRADE) and then later
>   in the code a vn_lock(..LK_DOWNGRADE) if it did the upgrade. (At
>   line #1531 as noted in the backtrace.)
>   - In unionfs_lock(), it sets LK_CANRECURSE when it is the rootvp and
>     LK_EXCLUSIVE. (So it allows recursive acquisition in this case.)
> --> Then it would call vn_lock(..LK_DOWNGRADE), which would panic if
>     the lock has recursed.
>
> Now, I'll admit unionfs_lock() is too obscure for me to understand, but...
> Is it necessary to vn_lock(..LK_DOWNGRADE), or can unionfs_readdir()
> just return with the vnode exclusively locked?
> (It would be easy to change the code to avoid the
> vn_lock(..LK_DOWNGRADE) call when it has done the
> vn_lock(..LK_EXCLUSIVE) after vn_lock(..LK_UPGRADE) fails.)
> 
> rick
> 
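The recursion scenario Rick describes can be modeled with a toy lock in userland. This is only an illustration of the assertion that fires, not the kernel lockmgr code; the class and method names are invented for the sketch:

```python
class ToyLock:
    """Minimal model of an exclusive lock that permits recursion,
    mirroring the lockmgr rule that a recursed lock cannot be downgraded."""
    def __init__(self):
        self.excl_depth = 0
        self.shared = False

    def lock_exclusive(self, canrecurse=False):
        if self.excl_depth > 0 and not canrecurse:
            raise RuntimeError("would deadlock: already held exclusively")
        self.excl_depth += 1

    def downgrade(self):
        # lockmgr panics here with "downgrade a recursed lockmgr"
        if self.excl_depth > 1:
            raise AssertionError("downgrade a recursed lock")
        self.excl_depth = 0
        self.shared = True

lk = ToyLock()
lk.lock_exclusive()                 # e.g. the LK_UPGRADE path in readdir
lk.lock_exclusive(canrecurse=True)  # rootvp case: LK_CANRECURSE allows recursion
try:
    lk.downgrade()                  # models the panic in the backtrace
    panicked = False
except AssertionError:
    panicked = True
```

The fix Rick sketches amounts to never calling downgrade when the exclusive hold may have recursed, i.e. returning with the vnode still exclusively locked.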
>>I ran your previous patch for some time.
>>Similarly, mounting one directory below a 2nd mountpoint crashed the
>>machine (I forgot to configure a dump dir, so I can't compare the
>>backtrace with the current patch).
>>Otherwise, at least with the previous patch, I haven't had any other
>>panic for about one week.

Something UFS-related seems to have tightened the unionfs locking
problem in stable/11.  Now the machine instantaneously panics during
boot, after mounting root, with Rick's latest patch.

Unfortunately I don't have SWAP available on that machine (yet), but
maybe this is a hint for anybody.

db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame
0xfe00982220e0
vpanic() at vpanic+0x186/frame 0xfe0098222160
kassert_panic() at kassert_panic+0x126/frame 0xfe00982221d0

Re: slow machine, swap in use, but more than 5GB of RAM inactive

2017-03-07 Thread Ian Smith
On Tue, 7 Mar 2017 10:19:35 +0800, Erich Dollansky wrote:
 > Hi,
 > 
 > I wonder about the slow speed of my machine while top shows ample
 > inactive memory:

( quoting from this top output because it's neater :)

 > last pid: 85287;  load averages:  2.56,  2.44, 1.68
 > up 6+10:24:45 10:13:36 191 processes: 5 running, 186 sleeping
 > CPU 0: 47.1% user,  0.0% nice, 51.4% system,  0.0% interrupt,  1.6% idle
 > CPU 1: 38.4% user,  0.0% nice, 60.4% system,  0.0% interrupt,  1.2% idle
 > CPU 2: 38.8% user,  0.0% nice, 59.2% system,  0.0% interrupt,  2.0% idle
 > CPU 3: 45.5% user,  0.0% nice, 51.0% system,  0.4% interrupt,  3.1% idle
 > Mem: 677M Active, 5600M Inact, 1083M Wired, 178M Cache, 816M Buf, 301M Free
 > Swap: 16G Total, 1352M Used, 15G Free, 8% Inuse

Others have covered the swap / inactive memory issue.

But I'd expect this to be slow, for any new work anyway .. there's next 
to no idle on any CPU.  I'd be asking, what's all of that system usage?

cheers, Ian
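Ian's point is quick arithmetic over the quoted top output: averaging the four CPUs gives about 55% system time and under 2% idle, which matches Kevin's "55% or so" estimate elsewhere in the thread:

```python
# Per-CPU (user, nice, system, interrupt, idle) percentages transcribed
# from the top output quoted above.
cpus = [
    (47.1, 0.0, 51.4, 0.0, 1.6),
    (38.4, 0.0, 60.4, 0.0, 1.2),
    (38.8, 0.0, 59.2, 0.0, 2.0),
    (45.5, 0.0, 51.0, 0.4, 3.1),
]
avg_system = sum(c[2] for c in cpus) / len(cpus)  # 55.5%
avg_idle = sum(c[4] for c in cpus) / len(cpus)    # just under 2%
```

With essentially no idle headroom, any new work queues behind the existing load regardless of how the memory question is resolved.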


Re: slow machine, swap in use, but more than 5GB of RAM inactive

2017-03-07 Thread Erich Dollansky
Hi,

On Tue, 7 Mar 2017 10:02:42 +0300
Slawa Olhovchenkov  wrote:

> On Tue, Mar 07, 2017 at 10:19:35AM +0800, Erich Dollansky wrote:
> 
> > I wonder about the slow speed of my machine while top shows ample
> > inactive memory:
> > 
> > last pid: 85287;  load averages:  2.56,  2.44, 1.68
> > up 6+10:24:45 10:13:36 191 processes: 5 running, 186 sleeping
> > CPU 0: 47.1% user,  0.0% nice, 51.4% system,  0.0% interrupt,  1.6% idle
> > CPU 1: 38.4% user,  0.0% nice, 60.4% system,  0.0% interrupt,  1.2% idle
> > CPU 2: 38.8% user,  0.0% nice, 59.2% system,  0.0% interrupt,  2.0% idle
> > CPU 3: 45.5% user,  0.0% nice, 51.0% system,  0.4% interrupt,  3.1% idle
> > Mem: 677M Active, 5600M Inact, 1083M Wired, 178M Cache, 816M Buf, 301M Free
> > Swap: 16G Total, 1352M Used, 15G Free, 8% Inuse
> > 
> > The swap space in use can be explained by large compilations done
> > recently. Why is the inactive memory not put to use?
> > 
> > I do not want to restart the machine. So, if I could help find the
> > source of the problem, I would.
> 
> Inactive is not "unused" memory.
> These are just pages that have not been touched in the last 10(?)
> seconds, but all of them are allocated (via malloc, mmap, sendfile,
> etc.) to applications (userland programs).
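The behavior described here, pages staying charged to a process even after it stops touching them, can be observed from userland with the stdlib. A sketch; note that ru_maxrss is reported in KiB on Linux but in bytes on FreeBSD and macOS, a well-known portability wart:

```python
import resource

def peak_rss():
    """Peak resident set size of this process (KiB on Linux,
    bytes on FreeBSD/macOS)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_rss()
buf = bytearray(50 * 1024 * 1024)
for i in range(0, len(buf), 4096):  # touch every page so it becomes resident
    buf[i] = 1
after = peak_rss()
# Once touched, the pages stay attributed to the process; the VM may
# later classify them as inactive, but they are not free memory.
```

This is why a large Inact figure in top does not mean the machine has that much memory readily at hand: those pages still belong to processes and must be reclaimed (or paged out) before reuse.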

Something must have changed since my last update of FreeBSD. The machine
is currently on FreeBSD 10.3-STABLE #2 r313849 (Fri Feb 17). It has been
on 10 since 2012. It was never this slow. Anyway, I will soon reboot it
into a new kernel and see what happens then.

Thanks!

Erich


Re: 'show alllocks' of completely locked machine [Was: Re: Complete IO lockup, state "ufs" from userland, debuging help wanted]

2017-03-07 Thread Harry Schmalzbauer
Regarding hiren panchasara's message from 06.03.2017 21:10 (localtime):
> On 03/06/17 at 08:56P, Harry Schmalzbauer wrote:
>> Regarding Harry Schmalzbauer's message from 05.03.2017 22:59 (localtime):
>>>  Hello,
>>>
>>> I can easily lock up FreeBSD stable/11 from userland. Not that I want to...
>>> I'm running squid, which starts an authentication helper,
>>> "*negotiate_kerberos_auth*", which seems to be the culprit.
>>> Absolutely all IO is blocked; there's no way to get anything from any
>>> filesystem.
>>> All non-IO-requesting processes (threads) run fine, including sshd and
>>> shells.
>>> There's no load (neither CPU nor IO); any process requesting IO just
>>> gets stuck in state "ufs".
>>>
>>> Can anyone help me find out what's going wrong?
>>> Serial console is available.
>>> Serial console is available.
>>
>> Dear hackers,
>>
>> I managed to get into DDB, but I'm lost from there.
>>
>> What information could be useful to find out the cause of this complete
>> lockup?
>>
>> I'd need someone who could guide me through it; I'd pay for a debugging
>> lesson! (quite constrained budget though)
>>
>> This happens when the machine got stuck:
>>
>> intr_event_handle() at intr_event_handle+0x9c/frame 0xfe0093dcb7d0
>> intr_execute_handlers() at intr_execute_handlers+0x48/frame
>> 0xfe0093dcb800
>> lapic_handle_intr() at lapic_handle_intr+0x68/frame 0xfe0093dcb840
>> Xapic_isr1() at Xapic_isr1+0xb7/frame 0xfe0093dcb840
>> --- interrupt, rip = 0x807b9bd6, rsp = 0xfe0093dcb910, rbp =
>> 0xfe0093dcb910 ---
>> acpi_cpu_c1() at acpi_cpu_c1+0x6/frame 0xfe0093dcb910
>> acpi_cpu_idle() at acpi_cpu_idle+0x2ea/frame 0xfe0093dcb960
>> cpu_idle_acpi() at cpu_idle_acpi+0x3f/frame 0xfe0093dcb980
>> cpu_idle() at cpu_idle+0x8f/frame 0xfe0093dcb9a0
>> sched_idletd() at sched_idletd+0x436/frame 0xfe0093dcba70
>> fork_exit() at fork_exit+0x84/frame 0xfe0093dcbab0
>> fork_trampoline() at fork_trampoline+0xe/frame 0xfe0093dcbab0
>> --- trap 0, rip = 0, rsp = 0, rbp = 0 ---
>>
>>
>> db> show alllocks
>> Process 1259 (negotiate_kerberos_) thread 0xf80005ddea00 (100096)
>> shared lockmgr ufs (ufs) r = 0 (0xf8000523d5f0) locked @
>> /usr/local/share/deploy-tools/RELENG_11/src/sys/kern/vfs_lookup.c:611
>> Process 1258 (negotiate_kerberos_) thread 0xf80005ddc500 (100252)
>> shared lockmgr ufs (ufs) r = 0 (0xf8000523d5f0) locked @
>> /usr/local/share/deploy-tools/RELENG_11/src/sys/kern/vfs_lookup.c:611
>> Process 1257 (negotiate_kerberos_) thread 0xf80005ddda00 (100247)
>> shared lockmgr ufs (ufs) r = 0 (0xf8000523d5f0) locked @
>> /usr/local/share/deploy-tools/RELENG_11/src/sys/kern/vfs_lookup.c:611
>> Process 1256 (negotiate_kerberos_) thread 0xf80065612500 (100261)
>> shared lockmgr ufs (ufs) r = 0 (0xf8000523d5f0) locked @
>> /usr/local/share/deploy-tools/RELENG_11/src/sys/kern/vfs_lookup.c:611
>> Process 1255 (negotiate_kerberos_) thread 0xf80065612a00 (100260)
>> shared lockmgr ufs (ufs) r = 0 (0xf8000523d5f0) locked @
>> /usr/local/share/deploy-tools/RELENG_11/src/sys/kern/vfs_lookup.c:611
>> Process 1254 (negotiate_kerberos_) thread 0xf80065613000 (100257)
>> shared lockmgr ufs (ufs) r = 0 (0xf8000523d5f0) locked @
>> /usr/local/share/deploy-tools/RELENG_11/src/sys/kern/vfs_lookup.c:611
>> Process 1253 (negotiate_kerberos_) thread 0xf80065614000 (100254)
>> shared lockmgr ufs (ufs) r = 0 (0xf8000523d5f0) locked @
>> /usr/local/share/deploy-tools/RELENG_11/src/sys/kern/vfs_lookup.c:611
>> Process 1252 (negotiate_kerberos_) thread 0xf800651e1000 (100246)
>> shared lockmgr ufs (ufs) r = 0 (0xf8000523d5f0) locked @
>> /usr/local/share/deploy-tools/RELENG_11/src/sys/kern/vfs_lookup.c:611
>> Process 1251 (negotiate_kerberos_) thread 0xf80005ddca00 (100251)
>> shared lockmgr ufs (ufs) r = 0 (0xf8000523d5f0) locked @
>> /usr/local/share/deploy-tools/RELENG_11/src/sys/kern/vfs_lookup.c:611
>> Process 1250 (negotiate_kerberos_) thread 0xf800651e2a00 (100241)
>> shared lockmgr ufs (ufs) r = 0 (0xf8000523d5f0) locked @
>> /usr/local/share/deploy-tools/RELENG_11/src/sys/kern/vfs_lookup.c:611
>> Process 1247 (sqtop) thread 0xf80065650a00 (100259)
>> shared lockmgr ufs (ufs) r = 0 (0xf8000523d5f0) locked @
>> /usr/local/share/deploy-tools/RELENG_11/src/sys/kern/vfs_lookup.c:611
>> Process 1184 (systat) thread 0xf80065613a00 (100255)
>> shared lockmgr ufs (ufs) r = 0 (0xf8000523d5f0) locked @
>> /usr/local/share/deploy-tools/RELENG_11/src/sys/kern/vfs_lookup.c:611
>> Process 1042 (negotiate_kerberos_) thread 
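The listing above shows many processes all holding the same shared lockmgr lock on one vnode, acquired during pathname lookup. One plausible pattern behind such a wedge (a toy model only, not FreeBSD's actual lockmgr, and not a diagnosis of this specific hang) is a writer-priority shared/exclusive lock: once an exclusive waiter queues behind the shared holders, new shared requests block too, so every fresh lookup lands in state "ufs":

```python
class ToyLock:
    """Toy writer-priority shared/exclusive lock (illustration only,
    NOT FreeBSD's real lockmgr implementation)."""

    def __init__(self):
        self.shared_holders = 0
        self.exclusive_held = False
        self.exclusive_waiters = 0

    def try_shared(self):
        # New shared requests block while an exclusive waiter is queued;
        # otherwise a stream of readers could starve the writer forever.
        if self.exclusive_held or self.exclusive_waiters > 0:
            return False
        self.shared_holders += 1
        return True

    def try_exclusive(self):
        if self.exclusive_held or self.shared_holders > 0:
            self.exclusive_waiters += 1
            return False
        self.exclusive_held = True
        return True


lock = ToyLock()
assert lock.try_shared()          # first reader gets the lock
assert not lock.try_exclusive()   # a writer must now wait for it
assert not lock.try_shared()      # and NEW readers block behind the writer
```

If one of the shared holders never releases (e.g. it is itself blocked on something else), everything queued behind the exclusive waiter stays wedged indefinitely, which matches the "all IO stuck" symptom.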

Re: slow machine, swap in use, but more than 5GB of RAM inactive

2017-03-07 Thread Brandon Allbery
On Tue, Mar 7, 2017 at 2:02 AM, Slawa Olhovchenkov  wrote:

> inactive is not 'not used' memory.
> this is just pages don't touched in last 10(?) seconds, but all of
> this allocated (such as malloc, mmap, sendfile) to application
> (userland programs).
>

Or, phrased differently: they're candidates to be paged out if something
requires that much memory soon. Meanwhile, stuff that is currently paged
out will stay there unless it is actively needed; why bother pulling it
back in if nothing actually needs it right now, especially since it got
paged out because nothing had used it recently (i.e. it was marked
inactive)?

-- 
brandon s allbery kf8nh   sine nomine associates
allber...@gmail.com  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad   http://sinenomine.net