Assuming that we should not check all CPUs. And in this case perhaps we can
even do something like
	if (!delayed_work_pending() &&
	    get_wq_data()->current_work != dwork)
		return;
but this needs barriers, and run_workqueue() needs smp_mb__before_clear_bit().
Linus
On Wed, 14 Oct 2009 14:02:10 -0700 (PDT)
Linus Torvalds torva...@linux-foundation.org wrote:
On Wed, 14 Oct 2009, Boyan wrote:
Works for me. I couldn't reproduce the problem with only this patch on
top of 2.6.31.4.
So just to verify: both the flush_to_ldisc() patch _and_ the
On Mon, 12 Oct 2009 16:46:41 -0700
Dmitry Torokhov dmitry.torok...@gmail.com wrote:
On Tue, Oct 13, 2009 at 12:38:41AM +0100, Alan Cox wrote:
So it seems likely to me that this is a kernel bug, somewhere, and the
TTY layer seems like a good place to look (OK, a horrible place
What I was pointing out is that there are a lot of
tty_buffer_request_room() calls, and as far as I can see, all of them
(or at least a large percentage) are just pure and utter crap.
Almost certainly. When the original conversion was done all the code
which tried to peer into the flip
But flush_to_ldisc() itself has a real oddity: it uses tty->buf.lock to
protect everything, BUT NOT THE ACTUAL CALL TO ->receive_buf()!
Indeed, or it deadlocks
Anyway, the above explanation feels right. It would easily explain the
behavior, because if the ->receive_buf() calls get re-ordered,
And no, I'm not sure my patch helps. I'd have expected
'tty_buffer_flush()' to be something very rare, for example. But I also
didn't really check if we may do it some other way.
It is rare for most applications
But I _am_ sure that it makes the code a whole lot more straightforward.
I can't help feeling a mutex might be simpler. It would also then fix
tiocsti() which is most definitely broken right now and documented as
racing.
Hmm. Those tty's have too many different locks already.
But maybe we could just have one generic mutex, and use it for termios and IO
So it seems likely to me that this is a kernel bug, somewhere, and the
TTY layer seems like a good place to look (OK, a horrible place, but a
*likely* place).
Somewhere around 2.6.29-30 various things went funny in the keyboard
layer for me - notably characters bleeding across console
On Mon, 12 Oct 2009 00:22:05 +0200 (CEST)
Rafael J. Wysocki r...@sisk.pl wrote:
This message has been generated automatically as a part of a report
of recent regressions.
The following bug entry is on the current list of known regressions
from 2.6.31. Please verify if it still should be
So it will use the 64kb limit in at least a few paths. I'm not sure,
though; the non-n_tty path (e.g. ppp) doesn't always use the
tty_write_room() check. It may not be consistent if we removed
pty_space() in pty_write().
The correct behaviour for most network protocols on overflow is to drop
packets, so
And if it _doesn't_ fix it, then I think we'll just have to revert the
commits in question. We won't have time to root-cause it if the above
isn't it.
In which case ppp will no longer work properly in some cases (ditto
other protocols) and things like the pppoe gateway won't work as they
After writing the above, the voices in my head started clamoring about
this space allocated vs bytes buffered thing, which I was obviously
aware of, but hadn't thought about as an issue.
And you know what? The thing about space allocated vs bytes buffered
is that writing _one_ byte (the
On Sun, 26 Jul 2009 22:45:29 +0200 (CEST)
Rafael J. Wysocki r...@sisk.pl wrote:
This message has been generated automatically as a part of a report
of regressions introduced between 2.6.29 and 2.6.30.
The following bug entry is on the current list of known regressions
introduced between
Another thing to check is whether this patch (not yet merged) fixes it:
http://marc.info/?l=linux-usb&m=124825571403844&w=2
No, unfortunately it does not fix it. I've just tested it.
I wouldn't expect it to.
In the ppp case you have this occurring on an unplug
USB layer
And the fact that is, reverting your commit made things work under
the same testing conditions.
Do you have any suggestions as to what I should do between unplugging
and plugging it back?
Got it - if the port is opened twice (e.g. by two apps) it does this.
Reproduced and testing a fix.
On Mon, 29 Jun 2009 01:51:09 +0200 (CEST)
Rafael J. Wysocki r...@sisk.pl wrote:
This message has been generated automatically as a part of a report
of recent regressions.
The following bug entry is on the current list of known regressions
from 2.6.30. Please verify if it still should be
BUG is still here...
kernel: [ 7331.719657] EIP is at _spin_unlock_irqrestore+0x16/0x30
Which doesn't reschedule - sorry at this point I can't help you any
further. The traces make no sense.
--
To unsubscribe from this list: send the line unsubscribe kernel-testers in
the body of a message to
Rename pagerange_is_ram() to pat_pagerange_is_ram() and add the
"track legacy 1MB region as non-RAM" condition.
But the lowest 640K are most definitely RAM.
On Sun, 22 Mar 2009 13:37:35 +0100
Markus m4rkus...@web.de wrote:
This message has been generated automatically as a part of a report
of recent regressions.
The following bug entry is on the current list of known regressions
from 2.6.28. Please verify if it still should be listed and
On Sun, 25 Jan 2009 22:25:59 -0800
Larry Baker ba...@usgs.gov wrote:
Try a different, preferably new, cable. Use an 80-conductor high-speed
cable if you can. Also, check the pins to make sure none of
them are bent.
A bad cable would give you CRC errors, uncorrectable error is a drive
On Mon, 26 Jan 2009 02:09:56 +0530
Jaswinder Singh Rajput jaswinderli...@gmail.com wrote:
Hello all,
I added an extra IDE drive and got messages like the ones below, so I
replaced it with another drive, but I am still getting similar error
messages on 2.6.29-rc2-tip:
hdb: status error: status=0x59 {
external BMC (TCO) (loaded by external BMC from the SMBus) and one for
software only (not used by hardware). Some chips only support #1 (and
#4 of course).
We have had historic problems where a very non standard EEPROM setup on
some ancient thinkpads ended up with bad stuff happening due to
You have a good point that aiming at 4kB makes 8kB a very safe choice.
Not really no - we use separate IRQ stacks in 4K but not 8K mode on
x86-32. That means you've actually got no more space if you are unlucky
with the timing of events. The 8K mode is merely harder to debug.
If 4K stacks
What about deep call chains? The problem with the uptake of 4K stacks
seems to be that it is not reliably provable that it will work under all
circumstances.
On x86-32 with 8K stacks your IRQ paths share them so that is even harder
to prove (not that you can prove any of them) and the bugs are
By your logic though, XFS on x86 should work fine with 4K stacks -
many will attest that it does not and blows up due to stack issues.
I have first hand experiences of things blowing up with deep call
chains when using 4K stacks where 8K worked just fine on same
workload.
So there is
We need to fix this.
Why not just revert the offending change and try again during the next
merge window, assuming someone has figured out an acceptable way to
handle this mess by then?
Easier just to fix it. It's a case of building everything until it
compiles with the prototype change.
Btw, why is unlocked_ioctl returning long? Does anybody depend on that
too? That's another difference between the unlocked and the traditional
version..
I don't know - a lot of syscall returns got defined as long and I guess
someone thought propagating the right type was a good idea?
As
The issue is fiddly but reproducible. All help in pinpointing the
problem source is appreciated.
For the kernel bisect, if you get stuck at a point that fails, remember
that point and then lie (either yes or no) about it working and carry on.
If need be you can go back the other way.
Another completely