I'm a little confused.  What state is the vnlru kernel thread in?  It
    sounds like vnlru must be stuck.

    Note that you can gdb the live kernel and get a stack backtrace of any
    stuck process.

    gdb -k /kernel.debug /dev/mem       (or whatever)
    proc N                              (e.g. vnlru's pid)
    back

    All the processes stuck in 'inode' are likely associated with the
    problem, but if that is what is causing vnlru to be stuck, I would
    expect vnlru itself to be stuck in 'inode'.

    unionfs is probably responsible.  I would not be surprised at all if 
    unionfs is causing a deadlock somewhere which is creating a chain of
    processes stuck in 'inode' which is in turn causing vnlru to get stuck.

                                        -Matt
                                        Matthew Dillon 
                                        <[EMAIL PROTECTED]>

:
:On Mon, 26 May 2003, Mike Harding wrote:
:
:> Er - are any changes made to RELENG_4_8 that aren't made to RELENG_4?  I
:> thought it was the other way around - that 4_8 only got _some_ of the
:> changes to RELENG_4...
:
:Ack, my fault ... sorry, wasn't thinking :(  RELENG_4 is correct ... I
:should have confirmed my settings before blathering on ...
:
:One of the scripts I used extensively while debugging this ... a quite
:simple one .. was:
:
:#!/bin/tcsh
:while ( 1 )
:  echo `sysctl debug.numvnodes` - `sysctl debug.freevnodes` - `sysctl debug.vnlru_nowhere` - `ps auxl | grep vnlru | grep -v grep | awk '{print $20}'`
:  sleep 10
:end
:
:which outputs this:
:
:debug.numvnodes: 463421 - debug.freevnodes: 220349 - debug.vnlru_nowhere: 3 - vlruwt
:
:I have my maxvnodes set to 512k right now ... now, when the server "hung",
:the output would look something like (this would be with 'default' vnodes):
:
:debug.numvnodes: 199252 - debug.freevnodes: 23 - debug.vnlru_nowhere: 12 - vlrup
:
:with the critical bit being the vlruwt -> vlrup change ...
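
A small sketch of how one could watch for that transition automatically,
assuming the monitoring line's last field is vnlru's wait channel (as in
the script above); the function name and the sample line are illustrative,
not part of the original scripts:

```shell
#!/bin/sh
# Hypothetical helper: given one line of the monitoring script's output,
# flag the vlruwt -> vlrup transition that marked the hang.
check_vnlru_state() {
    # The last whitespace-separated field is vnlru's wait channel
    # (the ps MWCHAN column captured by awk '{print $20}' above).
    state=$(echo "$1" | awk '{print $NF}')
    if [ "$state" = "vlrup" ]; then
        echo "WARNING: vnlru stuck in vlrup"
    else
        echo "vnlru ok ($state)"
    fi
}

check_vnlru_state "debug.numvnodes: 199252 - debug.freevnodes: 23 - debug.vnlru_nowhere: 12 - vlrup"
```

One could feed each line of the monitoring loop through this instead of
eyeballing the output every 10 seconds.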
:
:with unionfs, you are using two vnodes per file instead of one in
:non-union mode, which is why I went to 512k vs the default of ~256k vnodes
:... it doesn't *fix* the problem, it only reduces its occurrence ...
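
For reference, the 512k figure above corresponds to kern.maxvnodes = 524288.
On FreeBSD it can be raised at runtime with sysctl; a sketch (the value is
the one quoted here, not a general recommendation -- size it to your RAM):

```shell
# Raise the vnode limit to 512k (the default is derived from RAM at boot).
sysctl -w kern.maxvnodes=524288

# To make it persist across reboots, add it to /etc/sysctl.conf:
#   kern.maxvnodes=524288
```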
:_______________________________________________
:[EMAIL PROTECTED] mailing list
:http://lists.freebsd.org/mailman/listinfo/freebsd-stable
:To unsubscribe, send any mail to "[EMAIL PROTECTED]"
:

