/git/torvalds/linux-2.6.git;a=commit;h=4f84e4be53a04a65d97bf0faa0c8f99e29bc0170
[2] http://investor.google.com/conduct.html
--
From: Joshua Wise <[EMAIL PROTECTED]>
Background:
In some situations, mce_log would race against mce_read and deadlock. This
race condition is described in more detail
Hopefully this patch has not been munged by pine; I have, minimally,
unchecked the mung-patches-sent-to-lkml option in pine's config. In the case
that it has been munged, I have also attached it.
--
From: Joshua Wise <[EMAIL PROTECTED]>
Background:
This patch is a follow-on to "Info dump on Oops".
On Thu, 28 Jun 2007, Andrew Morton wrote:
Your email client is doing space-stuffing. It's easy enough to fix at this
end, but even easier if you fix it ;)
Aw darn :( Stupid PINE. I'll fix it for the next patch.
+ atomic_notifier_call_chain(&info_dumper_list, 0, NULL);
[...]
So... Please
Okay, fair enough. Fixed version follows. I also fixed some checkpatch
issues that I missed before.
However, please note that in general, the info dumped by this will consist
of only a few lines. In our implementations, I believe that we only dump one
line per notifier.
joshua
--
From: Joshua Wise <[EMAIL PROTECTED]>
Background:
When managing a large number of servers, as Google does, it's sometimes
useful to get an "at-a-glance" view of a machine when it crashes. When no
other post-mortem is possible, it's often useful to know how long the
machine has been powered
PROTECTED] and
Masoud Sharbiani [EMAIL PROTECTED].
Changelog:
v2 -- fixed some checkpatch issues. Moved dumping to before Oopses, as per
the suggestion of Kyle McMartin [EMAIL PROTECTED].
Patch:
This patch is against git 48d8d7ee5dd17c64833e0343ab4ae8ef01cc2648.
Signed-off-by: Joshua Wise [EMAIL PROTECTED]
From: Joshua Wise <[EMAIL PROTECTED]>
Background:
When a userspace application wants to know about machine check events, it
opens /dev/mcelog and does a read(). Usually, we found that this interface
works well, but in some cases, when the system was taking large numbers of
machine check
On Tue, 17 Apr 2007, Shaohua Li wrote:
Looks there is init order issue of sysfs files. The new refreshed patch
should fix your bug.
Yes, that did fix the hang on resume from STR -- that now works fine.
However:
[EMAIL PROTECTED]:/sys/devices/system/cpu/cpuidle$ cat available_drivers
On Mon, 16 Apr 2007, Shaohua Li wrote:
On Sat, 2007-04-14 at 01:45 +0200, Mattia Dongili wrote:
...
please check if the patch at
http://marc.info/?l=linux-acpi&m=117523651630038&w=2 fixed the issue
I have the same system as Mattia, and when I applied this patch and turned
CPU_IDLE back on, I got
On Wednesday 17 August 2005 12:43, Stephen Hemminger wrote:
> You will get more response to network issues on netdev@vger.kernel.org
Okay. Thanks.
> NAPI poll is usually called from softirq context. This means that
> hardware interrupts are enabled, but it is not in a thread context that
> can sleep.
[8037fcf4] start_kernel+0x44c/0x4e8
Apologies for any inconvenience.
joshua
On Wednesday 17 August 2005 09:32, Joshua Wise wrote:
> Hello LKML,
>
> I have recently been working on a network driver for an emulated
> ultra-simple network card, and I've run into a few snags with the NAPI. My
Hello LKML,
I have recently been working on a network driver for an emulated ultra-simple
network card, and I've run into a few snags with the NAPI. My current issue
is that it seems to me that my poll routine is being called from an atomic
context, so when poll calls rx, and rx calls