On Mon, 15 Feb 2016, Michal Meloun wrote:

On 13.02.2016 at 1:58, Marius Strobl wrote:
On Sat, Feb 13, 2016 at 06:53:25AM +1100, Bruce Evans wrote:
On Fri, 12 Feb 2016, Marius Strobl wrote:

On Fri, Feb 12, 2016 at 05:14:58AM +0000, Michal Meloun wrote:
Author: mmel
Date: Fri Feb 12 05:14:58 2016
New Revision: 295557
URL: https://svnweb.freebsd.org/changeset/base/295557

Log:
  UART: Fix spurious interrupts generated by ns8250 and lpc drivers:
   - don't enable transmitter empty interrupt before filling TX FIFO.

Are you sure this doesn't create a race that leads to lost TX ready
interrupts? For a single character, the TX FIFO very well may be empty
again at the point in time IER_ETXRDY is enabled now.  With the varying
behavior of 8250/16x50 chips - some of which is documented in sio(4) -

That is mostly FUD.  More likely driver bugs than chip bugs.

A non-broken xx50 interrupts after you (re)enable tx interrupts, iff
the fifo is already empty.  This gives a "spurious" interrupt.  But
perhaps depending on this is too fragile.  Normal operation is to keep
...
I'd expect there are many that no longer generate a TX ready at all
with this change in place. In this case, receiving spurious interrupts
(which ones? IIR_NOPEND? IIR_TXRDY?) with some devices appears to be
the lesser evil.

Not many.  Only broken ones.

In my experience many xx50 are broken, especially the integrated
on-board ones you still have in workstations and servers today.

I haven't seen any with this bug.  But I haven't seen many newer ones
or any in workstations or servers since I only use consumer-grade
motherboards -- maybe those are better :-).  Why would a new ASIC
reimplement bugs that were in the original 8250?  (IIRC, there was
an 8250 with lots of bugs and limited production, and an 8250A that
was better.  FreeBSD is much newer than the 8250A so it might never
have been used on an 8250.)

The "spurious" interrupts are just normal
ones from bon-broken chips:

- uart first does a potentially-unbounded busy-wait before doing
   anything to ensure that the fifo is empty.  This should be unnecessary
   since this function should not be called unless sc_txbusy is 0 and
   sc_txbusy should be nonzero if the fifo is not empty.  If it is called
   when the fifo is not empty, then the worst-case busy-wait is approx.
   640 seconds for a 128-byte fifo at 2 bps.  The 'broken_txfifo' case
   busy-waits for a long time in normal operation.
- enabling the tx interrupt causes one immediately on non-broken uarts
- the interrupt handler is normally called immediately.  Then it always
   blocks on uart_lock()
- then the main code fills the fifo and unlocks
- then the interrupt handler runs.  It normally finds that the fifo is
   not empty (since it has just been filled) and does nothing
- another tx interrupt occurs later and the interrupt handler runs again.
   It normally doesn't hit the lock again, and normally finds the fifo
   empty, so it does something.

You correctly describe what happens at r295556 with a non-broken xx50.
That revision causes a spurious interrupt with non-broken xx50 but
also ensures that the relevant TX interrupt isn't missed with broken
xx50 that do not issue an interrupt when enabling IER_ETXRDY. Besides,
as you say, the general approach of dynamically enabling TX interrupts
works around the common brokenness of these interrupts no longer going
away when they should.

But you are probably correct that a 1-byte write to the fifo often
loses the race.  This depends on how fast the hardware moves the byte
from the fifo to the tx register.  Actually, since we didn't wait
for the tx register to become empty, it will often take a full character
time before the move.  After that, I think it might take 1 bit time but
no more.

My concern is that with r295557, when this race is lost no TX interrupt
is seen at all with broken xx50 that do not trigger an interrupt when
enabling IER_ETXRDY.

Certainly that is a concern if there are chips with the bug.
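
To make the two orderings and the race window concrete, here is a rough
sketch (not the actual ns8250_bus_transmit(); the helpers and softc
fields are the ones quoted further down, locking and the sc_txbusy
bookkeeping are elided, and the 'ier' argument stands in for the saved
IER shadow):

X static void
X xx50_start_tx_sketch(struct uart_softc *sc, struct uart_bas *bas,
X     uint8_t ier)
X {
X       int i;
X
X       /* r295557 ordering: fill the fifo first ... */
X       for (i = 0; i < sc->sc_txdatasz; i++) {
X               uart_setreg(bas, REG_DATA, sc->sc_txbuf[i]);
X               uart_barrier(bas);
X       }
X
X       /*
X        * ... and only then enable the tx interrupt.  r295556 did this
X        * before the loop, so a non-broken xx50 with an empty fifo
X        * interrupted immediately ("spuriously").  With the new order, a
X        * 1-byte write may already have drained into the shift register
X        * by the time IER_ETXRDY is set; a chip that doesn't interrupt
X        * on unmasking then never signals tx-ready at all.
X        */
X       uart_setreg(bas, REG_IER, ier | IER_ETXRDY);
X       uart_barrier(bas);
X }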

No, I'm not sure; nobody can be sure if we are talking about
ns8250-compatible device(s). Also, all UARTs known to me generate an
interrupt on TX unmasking (assuming a level-sensitive interrupt).
Only the IIR can report a bad priority on some very old 8250s (if
memory still serves me).

Nah, compatible means only having bugs that are compatible, and a
normal xx50 doesn't have this bug :-).  Strict compatibility with
the original 8250 would give lots of bugs but it is unlikely that
anything new is compatible with that.

Driver bugs are still more likely.  This reminds me that the most
likely source of them is edge triggered ISA interrupts and drivers
not doing enough in the interrupt handler to ensure getting a new
interrupt.  I think you remember correctly that bad priority was
one of the bugs in the original 8250.

I only found the following scenario on multiple ARM SoCs.

Please note that the ARM architecture does not have vectored interrupts;
the CPU must read the actual interrupt source from an external interrupt
controller (GIC) register. This register contains a predefined value if
none of the interrupts are active.

1 - CPU1: enters ns8250_bus_transmit() and sets IER_ETXRDY.
2 - HW: UART interrupt is asserted, processed by GIC and signaled
   to CPU2.
3 - CPU2: enters interrupt service.

It is blocked by uart_lock(), right?

4 - CPU1: writes a character into the REG_DATA register.
5 - HW: UART clears its interrupt request.
6 - CPU2: reads the interrupt source register. No active interrupt is
   found, a spurious interrupt is signaled, and the CPU leaves the
   interrupt state.
7 - CPU1: executes uart_barrier(). This function is not empty on ARM,
   and can be slow in some cases.

It is not empty even on x86, although it probably should be.

BTW, if arm needs the barrier, then how does it work with
bus_space_barrier() referenced in just 25 files in all of /sys/dev?

On x86, bus_space_barrier is a dummy locked memory increment by 0.
Similar code for atomic ops was recently changed to null in all cases
except for atomic_thread_fence_seq_cst() where it was optimized to use
a better memory address.  atomic_thread_fence_seq_cst() is not used
by any other atomic primitive; it is used just twice in all of /sys
(in sched_ule.c).  Ordinary load/store on x86 is automatically
strongly ordered.  It is unclear what happens for memory-mapped and
i/o-mapped devices.  I think on x86, accesses to i/o-mapped devices
are ordered even more strongly than for memory (and with respect to
memory).  The reason for existence of memory-mapped devices is defeated
if their memory doesn't actually look like memory.  I think accesses to
them are ordered strongly on x86 too.  However, something even stronger
than atomic_thread_fence_seq_cst() might be needed to get an order
related to physical events -- something to flush write buffers.

i/o is slow enough even without the barriers, and on modern x86 the
time taken by the useless barrier is in the noise compared with the
time taken by the i/o.  E.g., on a 4+GHz Haswell system, for this write
loop in ns8250_bus_transmit():

X       for (i = 0; i < sc->sc_txdatasz; i++) {
X               uart_setreg(bas, REG_DATA, sc->sc_txbuf[i]);
X               uart_barrier(bas);
X       }

- uart_setreg() takes 1.5+ usec (6000+ cycles) for an ISA or PCI i/o-mapped
  bus write (plus a few more for software overheads)
- uart_barrier() takes about another 20 cycles

Why does uart use a loop with barrier after every i/o?
bus_space_write_multi_N() doesn't use any barriers internally on x86.
I'm not sure what it does on arm.  If a barrier is needed after every
i/o, then none of these 'multi' i/o functions can work in general.

My version of sio uses bus_space_write_multi_1() here.  On the same
Haswell system, this takes the same 1.5+ usec per byte if i/o-mapped,
but for a PCI bus device it takes only 150 nsec per byte when memory-
mapped -- 10+ times faster than for uart.  Unfortunately, PCI memory-
mapped writes are the only faster case on the Haswell system.  Older
systems are much faster.  The same serial card on a 16-year old 367 MHz
system with the PCI bus overclocked 20% can do the fast case in 125
nsec and the slow cases in 400 nsec.  125 nsec is only 45 cycles at
367 MHz.  uart's pessimizations work much better when the CPU is slower
-- now the ~20 extra cycles and a few more for other overheads are about
the same as the 45 cycles for the i/o, instead of in the noise.
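
For comparison, the multi-write approach comes down to something like
this (a sketch only, not sio's actual output routine; the usual
bus_space headers are assumed, and whether the trailing barrier is
wanted at all depends on the platform):

X static void
X xx50_tx_multi_sketch(bus_space_tag_t bst, bus_space_handle_t bsh,
X     const uint8_t *buf, size_t len)
X {
X       /* One call pushes 'len' bytes to the same register offset. */
X       bus_space_write_multi_1(bst, bsh, REG_DATA, buf, len);
X       /* A single barrier after the burst, if the platform wants one. */
X       bus_space_barrier(bst, bsh, REG_DATA, 1, BUS_SPACE_BARRIER_WRITE);
X }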

8 - HW: the character from the THR is transferred to the shift register
   and the UART signals the TX empty interrupt again.
9 - Goto 3.

This seems to be working as intended.

Currently, the GIC interrupt service routine (see [1]) reports a spurious
interrupt issue (an interrupt request disappears by itself, without any HW
action). This is a very valuable indicator of a driver problem for us (note,
ARM needs special synchronization for related inter-device writes, see
[2]), and I don't want to remove it.

It is indeed a valuable indicator.  I think it just detected that the
intended behaviour is a pessimization.
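
For readers without [1] at hand, that detection boils down to roughly
the following (a hedged sketch of the GICv2 acknowledge path, not the
actual code in [1]; the caller is assumed to have just read GICC_IAR):

X static int
X gic_ack_sketch(uint32_t iar)
X {
X       /*
X        * GICv2: an IAR read returns the interrupt ID in bits [9:0].
X        * IDs 1020-1023 are reserved; 1023 means "no interrupt
X        * pending", which is how a request that disappeared by itself
X        * shows up.
X        */
X       uint32_t irq = iar & 0x3ff;
X
X       if (irq >= 1020)
X               return (-1);    /* spurious: report it and bail out */
X       return ((int)irq);      /* genuine interrupt ID to dispatch */
X }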

On fast systems, the overhead for extra interrupts is also in the noise.
No one notices that interrupt overheads are a couple of thousand cycles
more than they should be when a single i/o takes 6000+ cycles.

Also, at this time, the UART driver is the last one known to generate
spurious interrupts in the ARM world.

So, what now? I can #ifdef __arm__ the change made in r295557 (for maximum
safety), if you want this. Or we can just wait and see if someone reports
a problem ...

Use better methods.

Perhaps the detection of "extra" interrupts is not really good.  Systems
with shared interrupts can't even tell when they get "extra" interrupts.
At best their interrupt handlers poll the device status registers as
efficiently as possible.  But when reading a single device register
takes 6000+ cycles, "as efficiently as possible" is not very efficient.
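
I.e., the best a handler on a shared line can do is something like this
(sketch only, using the usual 16x50 IIR bits; it is not uart(4)'s actual
ipend routine):

X static int
X xx50_shared_filter_sketch(struct uart_bas *bas)
X {
X       uint8_t iir;
X
X       /* One slow device read just to find out whether it is ours. */
X       iir = uart_getreg(bas, REG_IIR);
X       if (iir & IIR_NOPEND)
X               return (0);     /* not ours -- or an "extra" one */
X       return (1);             /* something to service */
X }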

Bruce
_______________________________________________
svn-src-head@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/svn-src-head
To unsubscribe, send any mail to "svn-src-head-unsubscr...@freebsd.org"
