On Tue, 11 Jul 2000, Bradley D. LaRonde wrote:
> I'm saying that I think that all interpretation and distribution of the data
> should be done in userspace, where it can be more easily engineered,
> debugged, and extended.
The extreme "everything in user space" stand is not always so good. At some
point you need a common base so the job doesn't have to be duplicated for
each and every client.
It's the role of the kernel to present a coherent view of all peripheral
types to user space apps, whether it is a sound card, a framebuffer device,
a touchscreen panel, etc. Otherwise we would already be managing ethernet
interfaces, IDE chipsets, etc. in user space.
The "all in the kernel" approach isn't good either, far from it. However, a
good balance is required so the work isn't duplicated for every user space
client. Since the kernel is the common point of communication between all
hardware components and all software apps, we need to use it in order not
to waste too much time on scattered user space maintenance. Hence the need
for a well defined kernel interface.
A good compromise might involve a common basic interface presenting the
data that all types of touchscreens can provide, with the possibility for
a user space app that wants more to switch the driver into a raw mode. The
same principle is already implemented with termios (think of cooked vs
raw modes), keyboards (translated, keycode or scancode modes), etc.
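To illustrate the cooked/raw switch, here is a minimal sketch by analogy
with termios -- the ioctl name, magic number and mode values below are all
invented for this example, not an existing interface:

```c
/* Hypothetical mode-switch ioctl, analogous to termios raw mode.
 * All names and numbers here are illustrative only. */
#include <sys/ioctl.h>

#define TS_MODE_COOKED 0        /* scaled, filtered events */
#define TS_MODE_RAW    1        /* untranslated hardware samples */
#define TSIOC_SMODE    _IOW('t', 0, int)

/* Ask the driver for raw samples; returns the ioctl() result. */
int ts_set_raw(int fd)
{
        int mode = TS_MODE_RAW;
        return ioctl(fd, TSIOC_SMODE, &mode);
}
```

A demanding client (say, a handwriting recognizer) would call ts_set_raw()
once after open() and then interpret the raw samples itself.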
This way, all windowing systems using the basic interface would work
automagically, without modification, whenever a new touchscreen driver is
produced -- which is not the case with the "all in user space" approach.
What I would change from the original proposition is the following:
For each TS event, a struct with the x, y and pressure coordinates plus a
time stamp would be returned, using int values rather than shorts. This
allows a much larger scale while avoiding alignment and size problems on
architectures that don't support shorts natively.
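One possible layout for such an event record -- the struct and field names
are illustrative, not an established API:

```c
/* Per-event record returned by the driver, int-sized as argued above.
 * Names are hypothetical. */
#include <sys/time.h>

struct ts_event {
        int x;                  /* absolute X, in device units */
        int y;                  /* absolute Y, in device units */
        int pressure;           /* 0 = pen up */
        struct timeval stamp;   /* time of the sample */
};
```

A client would then simply read whole structs from the device file:
read(fd, &ev, sizeof(ev)).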
Next, some ioctls that allow the client to probe for the x scale, y scale
and pressure scale supported by the hardware. Maybe also probe for the type
of coordinates, i.e. linear, polar or whatever could be in common usage. If
a certain piece of hardware doesn't produce sane coordinates and requires a
special transform, then it would have to be done in the driver so it fits
the nearest standard coordinate type -- just like different color modes
and representations are presented through the generic framebuffer
interface.
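The probing ioctls could look something like this -- again, the magic
number, command numbers and names are made up for the sketch:

```c
/* Hypothetical ioctls for querying hardware scales and coordinate type. */
#include <sys/ioctl.h>

struct ts_range {
        int min, max;           /* inclusive bounds reported by hardware */
};

#define TS_COORD_LINEAR  0
#define TS_COORD_POLAR   1

#define TSIOC_GXRANGE    _IOR('t', 1, struct ts_range)  /* x scale */
#define TSIOC_GYRANGE    _IOR('t', 2, struct ts_range)  /* y scale */
#define TSIOC_GPRANGE    _IOR('t', 3, struct ts_range)  /* pressure scale */
#define TSIOC_GCOORDTYPE _IOR('t', 4, int)              /* TS_COORD_* */
```

A window system would query these once at startup instead of carrying
per-device tables of hardcoded ranges.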
The driver would probably benefit from hysteresis processing whose
parameters are configurable from user space, based on the queried scale
ranges. This has to be done in the kernel too, so the user space client is
not woken up unnecessarily by noise.
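The hysteresis itself can be very simple: drop samples that moved less than
a configurable threshold from the last reported point. A real driver would
run this at interrupt time before queueing the event; the sketch below
(hypothetical names, made-up threshold) only shows the filtering logic:

```c
/* Minimal hysteresis filter: suppress samples within `threshold'
 * device units of the last reported position. */
struct ts_filter {
        int threshold;          /* minimum movement, set from user space */
        int last_x, last_y;     /* last reported position */
        int primed;             /* have we reported anything yet? */
};

static int abs_diff(int a, int b)
{
        return a > b ? a - b : b - a;
}

/* Returns 1 if the sample should be queued for the client, 0 if not. */
int ts_filter_sample(struct ts_filter *f, int x, int y)
{
        if (!f->primed ||
            abs_diff(x, f->last_x) >= f->threshold ||
            abs_diff(y, f->last_y) >= f->threshold) {
                f->last_x = x;
                f->last_y = y;
                f->primed = 1;
                return 1;
        }
        return 0;
}
```

Since the threshold is expressed in device units, user space would pick it
after probing the scale ranges, e.g. some small fraction of the x range.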
But all interpretation of coordinates to match screen pixels, calibration,
etc. has to be done in user space since that's what is likely to change
from one application to another.
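For instance, the usual user-space calibration step is an integer affine
transform from device units to screen pixels; the coefficients come from a
calibration routine and the names below are just for illustration:

```c
/* User-space calibration: x' = (a*x + b*y + c) / div,
 *                         y' = (d*x + e*y + f) / div.
 * Coefficients are produced by a calibration procedure. */
struct ts_calib {
        int a, b, c, d, e, f, div;
};

void ts_calibrate(const struct ts_calib *cal, int x, int y,
                  int *px, int *py)
{
        *px = (cal->a * x + cal->b * y + cal->c) / cal->div;
        *py = (cal->d * x + cal->e * y + cal->f) / cal->div;
}
```

Keeping this in user space means a rotated display, a different screen
resolution or a per-user calibration file never requires touching the
driver.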
Nicolas