On Tue, Aug 12, 2014 at 11:03:21AM +0530, Amit Shah wrote:
> On (Mon) 11 Aug 2014 [20:45:31], Paul E. McKenney wrote:

[ . . . ]

> > > That is a bit surprising.  Is it possible that the system is OOMing
> > > quickly due to grace periods not proceeding?  If so, maybe giving the
> > > VM more memory would help.
> > 
> > Oh, and it is necessary to build the kernel with CONFIG_RCU_TRACE=y
> > for the rcu_nocb_wake trace events to be enabled in the first place.
> > I am assuming that your kernel was built with CONFIG_MAGIC_SYSRQ=y.
> 
> Yes, it is :-)  I checked that booting with the rcu_nocb_poll cmdline
> option does indeed dump all the ftrace buffers to dmesg.

Good.  ;-)
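
For reference, the tracing-side pieces would look something like the
following.  (A sketch, not tested here; the boot-time enable and the
tracefs knob are alternatives, use whichever is convenient.)

	# Kernel config fragment:
	CONFIG_RCU_TRACE=y
	CONFIG_MAGIC_SYSRQ=y

	# Enable the event at boot via the kernel command line:
	trace_event=rcu:rcu_nocb_wake

	# ... or at run time from within the guest:
	echo 1 > /sys/kernel/debug/tracing/events/rcu/rcu_nocb_wake/enable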

> > If all of that is in place and no joy, is it possible to extract the
> > ftrace buffer from the running/hung guest?  It should be in there
> > somewhere!  ;-)
> 
> The only way I know of is via virtio-console (and that works through
> userspace only, though).

As in userspace within the guest?  That would not work, given that the
guest is hung.  The userspace that qemu is running in might, though.
There is a way to extract ftrace info
from crash dumps, so one approach would be "sendkey alt-sysrq-c", then
pull the buffer from the resulting dump.  For all I know, there might also
be some script that uses the qemu "x" command to get at the ftrace buffer.
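
Something along these lines might work, assuming the crash utility and
its ftrace extension (trace.so) are available on the host; the file
paths below are placeholders:

	# From the qemu monitor, either crash the guest (this needs kdump
	# or similar set up in the guest to produce a usable dump) ...
	(qemu) sendkey alt-sysrq-c

	# ... or snapshot guest memory directly, no guest cooperation needed:
	(qemu) dump-guest-memory /tmp/guest.vmcore

	# Then, on the host:
	$ crash /path/to/guest/vmlinux /tmp/guest.vmcore
	crash> extend trace.so
	crash> trace show        # print the ftrace ring-buffer contents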

Again, I cannot reproduce this, and despite going through the code
several times over the past few days, I am not seeing the problem.
I could start sending you random diagnostic patches, but it would be
much better if we could get the trace data from the failure.

                                                        Thanx, Paul
