Hi,

first of all, I'm sorry for the late response; I've only just found
time to work on this.

On Wed, Mar 31, 2010 at 8:04 AM, Grant Likely <grant.lik...@secretlab.ca> wrote:

> Is checking the T bit really necessary if Linux instead tracks the
> timing between bytes internally?  ie.  If the driver uses an hrtimer
> to schedule the submission of SPI transfers such that the time between
> SPI submissions is always greater than the time required for a serial
> character to be transmitted?

I think you have to check it anyway. For example, the SPI bus may be
shared with another device, so we don't know exactly when our char will
be sent (if the initial execution of the SPI command is delayed, the
wait can exceed the duration of a char on the serial line). But using
an hrtimer would certainly be fairer than polling the T bit as far as
resource usage is concerned. I've always been hesitant about hrtimers:
I really don't know if all platforms support them with the needed
granularity (at 115200 baud a char takes around 100us), and there
aren't many users of them in the drivers directory (almost all of them
are in staging). But it's definitely a good idea if hrtimers do work.
I'll run some tests.
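
Just so we mean the same thing, here's the kind of pacing I have in
mind (a completely untested sketch: foo_uart, foo_submit_next and the
exact 100us figure are placeholders, assuming 8N1 at 115200):

#include <linux/kernel.h>
#include <linux/hrtimer.h>
#include <linux/ktime.h>
#include <linux/spi/spi.h>

struct foo_uart {
	struct spi_device *spi;
	struct hrtimer timer;
	ktime_t char_time;		/* time of one char on the wire */
};

static void foo_submit_next(struct foo_uart *u)
{
	/* build the next one-char transfer and spi_async() it here */
}

static enum hrtimer_restart foo_timer_fn(struct hrtimer *t)
{
	struct foo_uart *u = container_of(t, struct foo_uart, timer);

	foo_submit_next(u);			/* runs in hard-irq context */
	hrtimer_forward_now(t, u->char_time);	/* keep a fixed cadence */
	return HRTIMER_RESTART;
}

static void foo_start(struct foo_uart *u)
{
	/* 10 bits per char at 115200 is ~87us; round up to 100us */
	u->char_time = ktime_set(0, 100 * NSEC_PER_USEC);
	hrtimer_init(&u->timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	u->timer.function = foo_timer_fn;
	hrtimer_start(&u->timer, u->char_time, HRTIMER_MODE_REL);
}

Even with this we'd still check the T bit, as said above, in case the
bus was busy with another device.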

>
> You may be able to set this up into a free-running state machine.
> Submit the SPI transfers asynchronously, and use a callback to be
> notified when the transfer is completed.  In most cases, the transfer
> will be completed every time, but if it is not, then the state machine
> can just wait another time period before submitting.  By doing it this
> way, all the work can be handled at the atomic context without any
> workqueues or threaded IRQ handlers.
>

Yes, a completely async design could improve performance; the greatest
culprit for low performance (not counting slow SPI master drivers) is
the latency before the delayed work actually gets started. When I first
wrote this driver I wanted to keep it simple, so I was a bit frightened
by a state-machine-like design, but it can certainly be done. My
concern is this: everything in the kernel is moving towards doing as
much as possible in some delayed-work mechanism (see the introduction
of threaded interrupts, which could become the default, or the "coming
soon" death of IRQF_DISABLED). Isn't doing a big part of the work in
interrupt context (of course I would call spi_async directly in the
interrupt handler and handle the incoming/outgoing chars in the
spi_async callback, which is usually called in interrupt context) a bit
"antihistorical"?
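
To be concrete, the free-running version I'd try looks roughly like
this (again just a sketch, all the foo_* names are made up; the buffers
live in the driver struct, which is assumed to be kmalloc'd so they are
DMA-safe, and a real version needs proper locking around the busy
flag):

#include <linux/spi/spi.h>
#include <linux/string.h>
#include <linux/errno.h>

struct foo_uart {
	struct spi_device *spi;
	struct spi_message msg;
	struct spi_transfer xfer;
	u16 tx_word, rx_word;	/* DMA-safe because struct is kmalloc'd */
	bool busy;
};

static int foo_kick(struct foo_uart *u);

/* usually runs in interrupt context */
static void foo_complete(void *ctx)
{
	struct foo_uart *u = ctx;

	u->busy = false;
	/* consume u->rx_word, check the T bit, then either call
	 * foo_kick() again right away or wait one char-time */
}

static int foo_kick(struct foo_uart *u)
{
	if (u->busy)
		return -EBUSY;
	u->busy = true;

	spi_message_init(&u->msg);
	memset(&u->xfer, 0, sizeof(u->xfer));
	u->xfer.tx_buf = &u->tx_word;
	u->xfer.rx_buf = &u->rx_word;
	u->xfer.len = 2;	/* one 16-bit command/data frame */
	spi_message_add_tail(&u->xfer, &u->msg);

	u->msg.complete = foo_complete;
	u->msg.context = u;

	/* spi_async() may be called from irq context, it never sleeps */
	return spi_async(u->spi, &u->msg);
}

So the interrupt handler would just call foo_kick() and everything else
happens in the callback.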

BTW: both of the design changes you mentioned seem sensible to me for
improving the driver's performance. But neither of them does any form
of batching, so they won't help if the underlying SPI master driver
uses some form of delayed work itself.
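
For the record, batching could look like the sketch below (hypothetical
again: FOO_BATCH, the 100us inter-frame delay and cs_change=1 are
guesses that would have to match the real device). With several frames
queued in one spi_message, the master driver's delayed-work latency is
paid once per burst instead of once per char:

#include <linux/spi/spi.h>
#include <linux/string.h>

#define FOO_BATCH 8	/* frames per burst, arbitrary */

struct foo_batch {
	struct spi_message msg;
	struct spi_transfer xfer[FOO_BATCH];
	u16 tx[FOO_BATCH], rx[FOO_BATCH];	/* DMA-safe if kmalloc'd */
};

static int foo_send_batch(struct spi_device *spi, struct foo_batch *b, int n)
{
	int i;

	spi_message_init(&b->msg);
	for (i = 0; i < n; i++) {
		memset(&b->xfer[i], 0, sizeof(b->xfer[i]));
		b->xfer[i].tx_buf = &b->tx[i];
		b->xfer[i].rx_buf = &b->rx[i];
		b->xfer[i].len = 2;
		b->xfer[i].cs_change = 1;	/* one CS pulse per frame */
		b->xfer[i].delay_usecs = 100;	/* pace at one char-time */
		spi_message_add_tail(&b->xfer[i], &b->msg);
	}
	return spi_async(spi, &b->msg);
}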

-- 
Christian Pellegrin, see http://www.evolware.org/chri/
"Real Programmers don't play tennis, or any other sport which requires
you to change clothes. Mountain climbing is OK, and Real Programmers
wear their climbing boots to work in case a mountain should suddenly
spring up in the middle of the computer room."
