> There are two interfaces in X that have to be built: one in which the touch
> screen is being used as the primary pointer, and then some other way to
> deal with it in all its glory, almost certainly via the X Input extension.

I've been thinking more about input recently, and it's less than clear 
that the input extension is sufficient for pen-based devices.  Because the 
input device is so limited, I believe that a significant amount of policy 
will be required to distinguish between gestures intended as text input 
and gestures intended for pointer input.
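
To make "policy" concrete: the decision I mean could be as simple as the
heuristic below.  The struct and the thresholds are invented for the sake
of the example; the point is only that the distinction is a policy choice,
not something the input extension gives us for free.

#include <stdlib.h>

struct stroke {
    int n;            /* number of samples in the stroke */
    int *x, *y;       /* sample coordinates */
    long duration_ms; /* pen-down to pen-up time */
};

enum intent { INTENT_POINTER, INTENT_TEXT };

/*
 * One plausible policy: a quick tap or a mostly straight drag looks like
 * pointer input; a longer, curvier stroke looks like a character.
 */
static enum intent classify(const struct stroke *s)
{
    long path = 0, chord, i;

    if (s->n < 2 || s->duration_ms < 150)
        return INTENT_POINTER;          /* a tap: treat as a click */

    for (i = 1; i < s->n; i++)          /* total path length (Manhattan) */
        path += labs(s->x[i] - s->x[i-1]) + labs(s->y[i] - s->y[i-1]);

    chord = labs(s->x[s->n - 1] - s->x[0]) + labs(s->y[s->n - 1] - s->y[0]);

    if (chord > 0 && path > 2 * chord)  /* path much longer than chord */
        return INTENT_TEXT;
    return INTENT_POINTER;
}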

In addition, we need support for a digital ink "overlay". One of the
important lessons about handwriting recognition (HWR) I learned from the
Itsy was that Graffiti with visual feedback is much more reliable than
without.
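
The echo itself is cheap; any client could draw it with core requests as
the samples arrive.  Something like this (window and GC setup omitted):

#include <X11/Xlib.h>

/*
 * Echo one ink segment as a new pen sample arrives.  A real overlay
 * would probably want a save-under or server-side surface so the ink
 * can be erased cheaply afterwards.
 */
static void echo_ink(Display *dpy, Window win, GC gc,
                     int prev_x, int prev_y, int x, int y)
{
    XDrawLine(dpy, win, gc, prev_x, prev_y, x, y);
    XFlush(dpy);        /* push it out immediately; latency kills feedback */
}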

It's almost as if a separate app should be reading the touch screen,
selectively synthesizing pointer and keyboard events, and drawing digital
ink on the screen to make HWR usable.  Perhaps a portion of this app
could be done within the X server using a grab/queuing mechanism similar
to the core grabs.
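
The synthesis half at least has an existing mechanism: the XTEST
extension can inject core pointer and key events.  Roughly like this
(modifier and error handling omitted):

#include <X11/Xlib.h>
#include <X11/extensions/XTest.h>

/* Replay a pen sample as core pointer motion. */
static void forward_as_pointer(Display *dpy, int x, int y)
{
    XTestFakeMotionEvent(dpy, -1, x, y, 0);     /* -1 = current screen */
    XFlush(dpy);
}

/* Deliver a recognized character as a fake key press/release pair. */
static void forward_as_key(Display *dpy, KeySym sym)
{
    KeyCode kc = XKeysymToKeycode(dpy, sym);

    if (kc == 0)
        return;                                 /* no keycode for symbol */
    XTestFakeKeyEvent(dpy, kc, True, 0);        /* press */
    XTestFakeKeyEvent(dpy, kc, False, 0);       /* release */
    XFlush(dpy);
}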

Here's an idea -- have pen-down events cause raw input to be directed at
the external agent and simultaneously queued within the server.  The agent
could then monitor the gesture and at some point decide whether to echo
with digital ink, and finally decide whether to forward the entire queued
event stream on to the X pointer input queue or send it along for HWR. The
server could echo the ink itself or we could let the agent generate X
protocol for that.
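
To show the division of labor, the agent's main loop might read like the
sketch below.  None of the server-side mechanism exists yet, so every
name in it is hypothetical, standing in for whatever grab/queue protocol
we end up defining:

typedef struct { int x, y, down; } PenEvent;

extern int  next_pen_event(PenEvent *ev);                /* raw grab stream */
extern int  looks_like_writing(const PenEvent *q, int n);
extern void echo_ink_segment(const PenEvent *a, const PenEvent *b);
extern void replay_to_pointer(const PenEvent *q, int n); /* release queue */
extern void send_to_hwr(const PenEvent *q, int n);       /* recognize */

#define MAXQ 1024

void handle_gesture(void)
{
    PenEvent q[MAXQ], ev;
    int n = 0;
    enum { UNDECIDED, AS_POINTER, AS_INK } d = UNDECIDED;

    while (next_pen_event(&ev) && n < MAXQ) {
        q[n++] = ev;

        /* Decide as early as possible; until then the server holds
         * the queued events undelivered. */
        if (d == UNDECIDED && looks_like_writing(q, n))
            d = AS_INK;

        if (d == AS_INK && n >= 2)
            echo_ink_segment(&q[n - 2], &q[n - 1]);  /* live feedback */

        if (!ev.down)
            break;                                   /* pen-up: done */
    }

    if (d == AS_INK)
        send_to_hwr(q, n);        /* hand the whole stroke to HWR */
    else
        replay_to_pointer(q, n);  /* forward queue as core pointer input */
}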

[EMAIL PROTECTED]         XFree86 Core Team       SuSE, Inc.


