Hi Anders,
On 19/09/2005 15:09, you wrote:
On 19/09/2005 11:52, Anders Blomdell wrote:
Hi,
I have an application that relies on setting the serial port into
low latency mode, but this doesn't work well with ftdi_sio.c, because:
1. setting low latency (priv->flags & ASYNC_LOW_LATENCY) requires
CAP_SYS_ADMIN
That has always been the case with Linux serial drivers.
Weird; I have the impression that I (as an ordinary user, with RW
access) have gotten lower latency by setting a port to low latency
(probably a 2.0.36 kernel, if memory serves me right). If I remember
correctly, c_cc[VTIME] = 0, c_cc[VMIN] = 1 was not enough back then.
Yes, the normal serial driver turns low latency off by default, so
turning it on would help. But you still needed CAP_SYS_ADMIN to turn it
on, usually with the setserial program and some sort of start-up script.
(Actually, 2.0.36 doesn't implement capabilities so it uses 'suser()'
instead of 'capable(CAP_SYS_ADMIN)'.)
Of course in the old days, you couldn't unplug the serial ports, so
initializing them at system start-up was good enough. These days, for
USB serial ports it should be possible to do something similar with
hotplug scripts or udev rules or something every time the device is
plugged in, but I've never tried it.
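For reference, the low_latency flag is normally toggled from user
space with the TIOCGSERIAL/TIOCSSERIAL ioctls; a minimal sketch of
the usual sequence follows (this is the standard serial API, nothing
ftdi_sio-specific, and it's the TIOCSSERIAL call that fails with
EPERM when the process lacks CAP_SYS_ADMIN):

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/serial.h>

/* Request low_latency on an already-open serial port fd. */
int set_low_latency(int fd)
{
	struct serial_struct ss;

	if (ioctl(fd, TIOCGSERIAL, &ss) < 0) {
		perror("TIOCGSERIAL");
		return -1;
	}
	ss.flags |= ASYNC_LOW_LATENCY;
	if (ioctl(fd, TIOCSSERIAL, &ss) < 0) {
		perror("TIOCSSERIAL");	/* EPERM without CAP_SYS_ADMIN */
		return -1;
	}
	return 0;
}

Run against an open /dev/ttyUSB0, that is enough to reproduce the
permission behaviour as an ordinary user.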
2. setting low latency doesn't affect the latency timer
(FTDI_SIO_SET_LATENCY_TIMER_REQUEST)
(Note for regulars: this is a hardware timer that tells the FTDI chip
how often to report received data.)
I guess it could be argued that there should be some link between the
two. Then again, it could also be argued that there should be some
link with c_cc[VTIME] in the termios structure in non-canonical mode
(though that has too large a range and too low a resolution to be of
much use here).
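(For anyone unfamiliar with the hardware side: programming that timer
amounts to a single vendor control request. A hedged sketch,
simplified from the driver's kernel-side code; the FTDI_SIO_* and
WDR_TIMEOUT constants are the ones defined in the driver's headers,
and error handling is omitted:)

/* Program the FTDI latency timer to 'latency_ms' (valid range
 * 1-255 ms); wValue carries the value, wIndex selects the port. */
static int set_ftdi_latency(struct usb_device *udev, __u16 iface,
			    __u16 latency_ms)
{
	return usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
			       FTDI_SIO_SET_LATENCY_TIMER_REQUEST,
			       FTDI_SIO_SET_LATENCY_TIMER_REQUEST_TYPE,
			       latency_ms, iface, NULL, 0, WDR_TIMEOUT);
}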
OK, should I implement that [as well/instead]?
  c_cc[VTIME] == 0 -> 1 ms
but should it be:
  == 1  -> 10 ms
  ...
  == 24 -> 240 ms
  >= 25 -> 250 ms
or:
  -> 16 ms otherwise
It's worse than that. The value is in deciseconds, so 1 corresponds to
100 ms. That's why I didn't think it was particularly suitable!
Because of that, it would only be worth treating c_cc[VTIME] == 0 as
special, i.e.:
c_cc[VTIME] == 0 -> 1 ms
c_cc[VTIME] != 0 -> 16 ms ('default' ms -- see below)
but note that c_cc[VTIME] only applies when (c_lflag & ICANON) == 0
(i.e. in non-canonical mode). In canonical mode it could default back
to using the 16 ms latency timer.
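(For anyone following along, the user-space side of that looks like
this; a minimal sketch of non-canonical mode with VMIN = 1 and
VTIME = 0, which is what Anders' application sets:)

#include <termios.h>

/* Non-canonical mode: read() completes as soon as one byte
 * arrives, with no inter-byte timeout (VTIME is in deciseconds). */
int set_min1_time0(int fd)
{
	struct termios tio;

	if (tcgetattr(fd, &tio) < 0)
		return -1;
	tio.c_lflag &= ~ICANON;	/* VMIN/VTIME only apply here */
	tio.c_cc[VMIN] = 1;
	tio.c_cc[VTIME] = 0;
	return tcsetattr(fd, TCSANOW, &tio);
}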
Actually, instead of using 16 ms as the default, the default should be
as set using the 'latency_timer' sysfs parameter. That isn't stored
anywhere currently (except in the hardware!) but it would be easy enough
to store it in the port's private data. In fact, if the 'latency_timer'
sysfs parameter sets the default value of the latency timer, the
'store_latency_timer' function should only touch the hardware if the
latency timer is in the 'default' state.
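A hedged sketch of how that could look; the 'latency' and
'latency_overridden' fields and the write_latency_timer() helper are
hypothetical names, and the sysfs handler signature varies between
kernel versions:

static ssize_t store_latency_timer(struct device *dev,
				   struct device_attribute *attr,
				   const char *buf, size_t count)
{
	struct usb_serial_port *port = to_usb_serial_port(dev);
	struct ftdi_private *priv = usb_get_serial_port_data(port);

	priv->latency = simple_strtoul(buf, NULL, 10);	/* new default */
	if (!priv->latency_overridden)		/* still in 'default' state? */
		write_latency_timer(port);	/* only then touch the hardware */
	return count;
}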
There's still a bit of a grey area about the best combination of the
low_latency flag and the VTIME parameter to control the hardware latency
timer, especially as the default hardware latency of 16 ms is typically
longer than the equivalent hardware latency for a regular serial port
(at least for baud rates above 2400 baud). One possible rule would be:
if the low_latency flag is set
AND the tty is in non-canonical mode
AND the VTIME parameter is 0
then minimise hardware latency
else use default hardware latency.
Justification for the above:
1. if low_latency flag isn't set we don't care about hardware latency.
2. if we're in canonical mode, assume there's a human on the other end.
3. VTIME only applies in non-canonical mode.
4. Any VTIME value greater than 0 corresponds to at least 100 ms, so we
don't care about low hardware latency.
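In code, the rule might look something like this (a sketch only:
choose_latency_timer() and priv->default_latency are hypothetical
names, and note that tty->termios was a pointer in kernels of that
era):

static __u8 choose_latency_timer(struct tty_struct *tty,
				 struct ftdi_private *priv)
{
	if ((priv->flags & ASYNC_LOW_LATENCY) &&
	    !(tty->termios->c_lflag & ICANON) &&
	    tty->termios->c_cc[VTIME] == 0)
		return 1;			/* minimise: 1 ms */
	return priv->default_latency;		/* sysfs default, 16 ms initially */
}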
The submitted patch makes:
1. The driver start in normal latency mode (previously low latency)
I'd rather it not do that. Low latency mode has some advantages that
would be lost by default with this patch.
OK, which advantages (apart from the low latency itself)? [I'm curious.]
It mostly affects when (and as a result of scheduling, how often) the
tty flip buffer contents are processed after receiving data. In
low-latency mode the contents are processed immediately, but in
normal-latency mode a task is scheduled to process the contents. At
high receive rates, this could result in data being thrown away because
the flip buffer hadn't flipped yet. (This can still be a problem even
in low-latency mode when data is received in large chunks, but I won't
go into that just now; the ftdi_sio driver has some code to deal with it
as long as some flow control is in effect.)
(Well, in fact, setting low latency by default in the USB serial drivers
had a big disadvantage on older 2.4 kernels as it quite often resulted
in a kernel "Oops" when data was received and processed at the "wrong"
time (when the USB port semaphore was already in use). But I think that
problem has been worked around now.)
2. Let any user change latency (i.e. no CAP_SYS_ADMIN required)
That goes against the grain of existing Linux serial drivers.
OK, so low latency has such far-reaching consequences.
The reasons are probably lost in the mists of time, but I guess
low_latency mode was considered more dangerous.
3. Setting low latency sets the latency timer to 1 ms; normal latency
sets the latency timer to 16 ms (chip default).
There is some merit in that, but keeping the port initialized to low
latency mode as I recommend would result in an increased overhead as
data may be received in shorter chunks. This would reduce the
maximum throughput. So it boils down to whether the port should be
initialized for lowest latency or maximum throughput by default.
Note that there is an "out of band" solution to tweaking the FTDI
chip's latency timer via a sysfs parameter.
I know, but then applications would need to be cluttered with that as
well. Since my application sets both low latency and c_cc[VTIME] = 0,
c_cc[VMIN] = 1, the patch would be a better solution (for me), since
the same code could be used on any serial port.
I think the intention is to set it from some sort of initialization
script that is run when the device is connected.
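(Something like the following would do it from such a script or
helper; the exact sysfs path is an assumption, since it depends on
how the device enumerates, e.g.
/sys/bus/usb-serial/devices/ttyUSB0/latency_timer:)

#include <stdio.h>

/* Write a latency value (in ms) to the port's 'latency_timer'
 * sysfs attribute from an init/hotplug helper. */
int set_latency_sysfs(const char *path, int ms)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fprintf(f, "%d\n", ms);
	return fclose(f);
}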
As mentioned above, I wouldn't expect changing the low_latency setting
to work on "any" serial port unless the process has the CAP_SYS_ADMIN
capability.
With the patch applied, low latency mode gives a delay of about 3-4 ms,
in contrast to the 16 ms normally encountered.
Just wondering: as you still get a latency of 3-4 ms, have you tried
setting the FTDI chip's latency timer to a similar number to see how
much difference it makes?
Any setting of 3 or higher makes latency worse; 1 and 2 are roughly
equivalent.
Thanks for the info. You might as well stick with 1 when setting the
hardware timer for low_latency. I'm still worried about the implications
for applications that receive a lot of data at high speed, but it's
difficult to please everybody. Maybe we need an extra sysfs parameter
so that both hardware latency timer values can be tweaked, e.g.
'latency_timer' and 'low_latency_timer'?
--
-=( Ian Abbott @ MEV Ltd. E-mail: <[EMAIL PROTECTED]> )=-
-=( Tel: +44 (0)161 477 1898 FAX: +44 (0)161 718 3587 )=-