On Wed, Sep 21, 2016 at 09:11:24AM +0200, Julien Charbon wrote:

> 
>  Hi Slawa,
> 
> On 9/20/16 10:26 PM, Slawa Olhovchenkov wrote:
> > On Tue, Sep 20, 2016 at 10:00:25PM +0200, Julien Charbon wrote:
> >> On 9/19/16 10:43 PM, Slawa Olhovchenkov wrote:
> >>> On Mon, Sep 19, 2016 at 10:32:13PM +0200, Julien Charbon wrote:
> >>>>
> >>>>> @ CPU_CLK_UNHALTED_CORE [4653445 samples]
> >>>>>
> >>>>> 51.86%  [2413083]  lock_delay @ /boot/kernel.VSTREAM/kernel
> >>>>>  100.0%  [2413083]   __rw_wlock_hard
> >>>>>   100.0%  [2413083]    tcp_tw_2msl_scan
> >>>>>    99.99%  [2412958]     pfslowtimo
> >>>>>     100.0%  [2412958]      softclock_call_cc
> >>>>>      100.0%  [2412958]       softclock
> >>>>>       100.0%  [2412958]        intr_event_execute_handlers
> >>>>>        100.0%  [2412958]         ithread_loop
> >>>>>         100.0%  [2412958]          fork_exit
> >>>>>    00.01%  [125]         tcp_twstart
> >>>>>     100.0%  [125]          tcp_do_segment
> >>>>>      100.0%  [125]           tcp_input
> >>>>>       100.0%  [125]            ip_input
> >>>>>        100.0%  [125]             swi_net
> >>>>>         100.0%  [125]              intr_event_execute_handlers
> >>>>>          100.0%  [125]               ithread_loop
> >>>>>           100.0%  [125]                fork_exit
> >>>>
> >>>>  The only write lock tcp_tw_2msl_scan() tries to get is an
> >>>> INP_WLOCK(inp).  Thus here, tcp_tw_2msl_scan() seems to be stuck
> >>>> spinning on INP_WLOCK (or pfslowtimo() is going crazy and calls
> >>>> tcp_tw_2msl_scan() at a high rate, but that would be quite unexpected).
> >>>>
> >>>>  Thus my hypothesis is that something is holding the INP_WLOCK and not
> >>>> releasing it, and tcp_tw_2msl_scan() is spinning on it.
> >>>>
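For reference, a simplified sketch of the loop in question (paraphrasing
sys/netinet/tcp_timewait.c; the reference counting and reuse/INP_INFO
handling are elided, so treat the details as approximate):

    for (;;) {
            TW_RLOCK(V_tw_lock);
            tw = TAILQ_FIRST(&V_twq_2msl);          /* oldest entry first */
            if (tw == NULL || (tw->tw_time - ticks > 0 && !reuse)) {
                    TW_RUNLOCK(V_tw_lock);
                    break;                          /* nothing expired yet */
            }
            inp = tw->tw_inpcb;
            in_pcbref(inp);                         /* keep the inp alive */
            TW_RUNLOCK(V_tw_lock);

            INP_WLOCK(inp);     /* blocking acquisition: this is what shows up
                                   as __rw_wlock_hard/lock_delay above when
                                   another thread holds the inp lock */
            /* ... expire the timewait state, drop the reference, unlock ... */
    }
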
> >>>>  If you can, could you compile the kernel with below options:
> >>>>
> >>>> options        DDB                     # Support DDB.
> >>>> options        DEADLKRES               # Enable the deadlock resolver
> >>>> options        INVARIANTS              # Enable calls of extra sanity checking
> >>>> options        INVARIANT_SUPPORT       # Extra sanity checks of internal structures, required by INVARIANTS
> >>>> options        WITNESS                 # Enable checks to detect deadlocks and cycles
> >>>> options        WITNESS_SKIPSPIN        # Don't run witness on spinlocks for speed
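If the box does drop into DDB with these options (via DEADLKRES or a manual
break), something along these lines should help identify the lock holder;
'show alllocks' needs WITNESS, and the exact commands available depend on
the kernel configuration:

    db> ps                    # find the thread stuck in tcp_tw_2msl_scan
    db> trace <pid>           # and look at its stack
    db> show alllocks         # all locks currently held, per thread
    db> show lock <address>   # owner/state of one lock, e.g. the inp lock
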
> >>>
> >>> Currently this host runs with 100% CPU load (on all cores), so
> >>> enabling WITNESS would cause a significant performance drop.
> >>> Can I use only a subset of these options?
> >>>
> >>> Also, I may have trouble entering DDB in this case.
> >>> Maybe kgdb would work (not tried yet)?
> >>
> >>  While these kernel options will certainly slow down your kernel, they
> >> also might find the root cause of your issue before reaching the point
> >> where you have 100% CPU load on all cores (thanks to INVARIANTS).  I
> >> would suggest:
> > 
> > Hmmm, maybe I was not clear.
> > This host runs at 100% CPU load on all cores at peak hours as normal
> > operation; it serves 2x10G of traffic, so the CPU load is not a result
> > of the lock issue in our case.  That is also why I am afraid to enable
> > WITNESS -- I fear the performance drop.
> > 
> > This lock issue happens irregularly and may be caused by another issue
> > (an nginx crash).  When it happens, about 1/3 of the cores sit at 100%
> > CPU load, perhaps because of this lock -- I can only trace one core at
> > a time, and that takes more than an hour (other cores may show a
> > different trace, I cannot guarantee anything).
> 
>  I see; especially if you are running in production, WITNESS might indeed
> not be practical for you.  In that case, to still get more information
> before resorting to WITNESS, I would suggest:
> 
>  #0: Do lock profiling:
> 
> https://www.freebsd.org/cgi/man.cgi?query=LOCK_PROFILING
> 
> options LOCK_PROFILING
> 
>  Example of usage:
> 
> # Run
> $ sudo sysctl debug.lock.prof.enable=1
> $ sleep 10
> $ sudo sysctl debug.lock.prof.enable=0
> 
> # Get results
> $ sysctl debug.lock.prof.stats | head -2; sysctl debug.lock.prof.stats | sort -n -k 4 -r
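As an aside, once the problem is occurring it may help to reset the counters
before sampling and to narrow the output to the TCP-related locks; the lock
names below ('tcpinp', 'tcptw') are an assumption about how they appear in
the stats, so adjust the pattern to whatever the full listing shows:

$ sudo sysctl debug.lock.prof.reset=1
$ sudo sysctl debug.lock.prof.enable=1; sleep 10; sudo sysctl debug.lock.prof.enable=0
$ sysctl debug.lock.prof.stats | egrep 'tcpinp|tcptw'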

OK, but in the case of a leaked lock (why else would the inp lock be held
so long against tcp_tw_2msl_scan?), won't I be unable to see the cause of
the lock if I run these commands only after the hang has already happened?

>  You can also use Dtrace and lockstat (especially with the lockstat -s
> option):
> 
> https://wiki.freebsd.org/DTrace/One-Liners#Kernel_Locks
> https://www.freebsd.org/cgi/man.cgi?query=lockstat&manpath=FreeBSD+11.0-RELEASE
> 
>  But I am less familiar with Dtrace/lockstat tools.
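For reference, an invocation along these lines (flags per lockstat(1);
untested here) should record lock contention events with stack traces while
the problem is happening, which would show what else is holding or waiting
on the inp lock:

$ sudo lockstat -C -s 8 -D 20 sleep 10   # contention events, 8-frame stacks, top 20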

OK, interesting too. Thanks.

> >>  #1. Try the above kernel options at least once and see what you can get.
> > 
> > OK, I will try this after some time.
> > 
> >>  #2. If #1 is a total failure, try the patch below:  It won't solve
> >> anything; it just makes tcp_tw_2msl_scan() less greedy when there is
> >> contention on the INP write lock.  If it makes the debugging more
> >> feasible, continue to #3.
> > 
> > OK, thanks.
> > What is the purpose of not skipping a locked tcptw in this loop?
> 
>  If I understand your question correctly:  According to your pmcstat
> result, tcp_tw_2msl_scan() currently struggles with a write lock
> (__rw_wlock_hard), and the only write lock used by tcp_tw_2msl_scan() is
> INP_WLOCK.  There is no sign of contention on TW_RLOCK(V_tw_lock) currently.

As I see it in the code, tcp_tw_2msl_scan takes the first node from
V_twq_2msl and then must take the RW lock on its inp, with no alternative.
Could tcp_tw_2msl_scan skip the current node and move on to the next node
in the V_twq_2msl list if the current node is locked for some reason?
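Purely to illustrate the question, a hypothetical (untested) reworking of
the scan loop might look like the sketch below.  It assumes an
INP_TRY_WLOCK() macro (a non-blocking rw_try_wlock(9) wrapper) and the
tw_2msl linkage field; the reference counting and INP_INFO locking that the
real code needs are deliberately left out:

    struct tcptw *tw, *tw_tmp;
    struct inpcb *inp;

    TW_RLOCK(V_tw_lock);
    TAILQ_FOREACH_SAFE(tw, &V_twq_2msl, tw_2msl, tw_tmp) {
            if (tw->tw_time - ticks > 0 && !reuse)
                    break;                  /* list is expiry-ordered */
            inp = tw->tw_inpcb;
            if (!INP_TRY_WLOCK(inp))        /* busy: skip this entry and */
                    continue;               /* let a later pass retry it */
            /* ... expire the timewait state, then INP_WUNLOCK(inp) ... */
    }
    TW_RUNLOCK(V_tw_lock);

Whether that would be safe with respect to the tw/inp reference counting
and lock ordering is of course a separate question.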

> 51.86%  [2413083]  lock_delay @ /boot/kernel.VSTREAM/kernel
>  100.0%  [2413083]   __rw_wlock_hard
>   100.0%  [2413083]    tcp_tw_2msl_scan
> 
> --
> Julien
_______________________________________________
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
