On Thu, Jul 04, 2013 at 03:49:02AM -0400, Dave Jones wrote:
I don't use the auto config, because I end up filling up /boot
unless I go through and clean them out by hand every time I install
a new one (which I do probably a dozen or so times a day).
Is there some easy way to prune old builds?
On Thu, Jul 4, 2013 at 11:51 PM, Ingo Molnar wrote:
>
> It would be nice to have a full-kernel-stack backtrace printout as an
> option, which prints entries 'above' the current RSP.
One problem with that is that it is likely to be mostly overwritten by
the debug code that is about to print this.
* H. Peter Anvin wrote:
On 07/03/2013 07:49 PM, Linus Torvalds wrote:
[816f42bf] __schedule+0x94f/0x9c0
[816f487e] schedule_user+0x2e/0x70
[816f6de4] retint_careful+0x12/0x2e
This call trace does indeed indicate that we took a hardware interrupt
which caused a reschedule. It doesn't necessarily have to be a quantum
expiration.
On Thu, Jul 4, 2013 at 12:49 AM, Dave Jones wrote:
>
> top of tree was 0b0585c3e192967cb2ef0ac0816eb8a8c8d99840 I think.
> (That's what it is on my local box that I pull all my test trees from,
> and I don't think it changed after I started that run, but I'll
> double check on Friday)
I'll look harder at the backtrace tomorrow, but my guess is that the cpu has
just gotten a scheduling interrupt (time quantum expired.)
On Wed, Jul 3, 2013 at 6:55 PM, Dave Jones wrote:
> This is a pretty context free trace. What the hell happened here?
That lack of call trace looks like it happened at the final stage of
an interrupt or page fault or other trap that is about to return to
user space.
My guess would be that the
This is a pretty context free trace. What the hell happened here?
BUG: scheduling while atomic: trinity-child0/13280/0xefff
INFO: lockdep is turned off.
Modules linked in: dlci dccp_ipv6 dccp_ipv4 dccp sctp bridge 8021q garp stp
snd_seq_dummy tun fuse hidp rfcomm bnep can_raw can_bcm