James Carrington wrote:
From our Windows 7 application experience, I would like to echo the
need for sophisticated applications to have their own gesture
recognition capability and access to raw touch data.
For example, SpaceClaim Engineer (a multi-touch CAD app on Windows)
recognizes dozens, perhaps approaching hundreds, of unique gestures.
They also use combinations of pen & touch in innovative ways, which
motivates them to want raw HID data from both touch and pen.
There won't be many of these applications, but the ones that do exist
will really show the advantages of multi-touch (and pen) and should be
supported, IMHO.
IMHO a good (or ideal) gesture engine should be able to support complex
gestures involving multiple input devices with varied modes of physical
expression. And an ideal engine will also enable applications to
express and register their desired gestures.
I do think those are some lofty goals and don't expect to see such an
ideal engine any time soon, but I think it's a goal worth striving for.
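Just to make "express and register" a bit more concrete, here is a purely
hypothetical sketch of what such a registration interface could look like.
Every name in it (GestureSpec, GestureEngine, GestureListener) is invented
for illustration only and doesn't correspond to any existing engine:

    // Purely hypothetical sketch -- none of these names exist in uTouch
    // or any shipping engine; they only illustrate "express and register".
    #include <string>
    #include <vector>

    struct TouchPoint { int id; float x, y; };   // one contact (finger or pen)

    struct GestureSpec {               // the application "expresses" this
        std::string name;              // e.g. "two-finger-rotate-with-pen-anchor"
        int min_touches;
        int max_touches;
        bool uses_pen;                 // combined pen + touch gesture
    };

    class GestureListener {            // callback object the app provides
    public:
        virtual void on_gesture(const GestureSpec &spec,
                                const std::vector<TouchPoint> &touches,
                                float progress) = 0;
        virtual ~GestureListener() {}
    };

    class GestureEngine {              // the one central engine
    public:
        // The app registers the gestures it wants; recognition itself
        // stays centralized in the engine.
        virtual int  register_gesture(const GestureSpec &spec,
                                      GestureListener *listener) = 0;
        virtual void unregister_gesture(int handle) = 0;
        virtual ~GestureEngine() {}
    };

The point is only that the application describes the gestures it cares
about while the recognition stays in one central place.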
James Carrington
N-trig
*From:*
multi-touch-dev-bounces+james.carrington=n-trig....@lists.launchpad.net
*On Behalf Of *Mark Shuttleworth
*Sent:* Wednesday, October 06, 2010 7:51 AM
*To:* Peter Hutterer
*Cc:* multi-touch-dev
*Subject:* Re: [Multi-touch-dev] Peter Hutterer's thoughts on MT in X
On 06/10/10 02:45, Peter Hutterer wrote:
This smacks of the old X inability to make a decision and commit to a
direction.
[citation needed]
From http://www.faqs.org/docs/Linux-HOWTO/XWindow-Overview-HOWTO.html
"One of X's fundamental tenets is "we provide mechanism, but not
policy". So, while the X server provides a way (mechanism) for window
manipulation, it doesn't actually say how this manipulation behaves
(policy)."
;-)
But I predict that sooner or later, we'll see a second and third
engine emerge, maybe for an app that needs really specialised gestures.
Competition could be a good thing if there's a deliberate effort to
share and work towards common interfaces and consistency where possible,
particularly for overlapping functionality. It would be nice to avoid
holy wars that hurt ordinary users.
Perhaps it would be worthwhile to have a clear set of guidelines.
Specify what your requirements are for a primary engine, and what
secondary and tertiary engines should avoid breaking. At least if they
want to be compatible with Ubuntu. Even if in the end such specs don't
result in good competitive alternatives, they will clarify the thinking
and philosophy for uTouch.
I agree, and I can think of use cases that support that, for example CAD
applications.
Where I disagree with your speculation is the idea that it would be a
good thing to support multiple gesture engines for different toolkits. By
definition, a toolkit is general-purpose, and maps to a whole portfolio
of apps. XUL, Qt, Gtk are examples. Having different engines there means
that whole sets of apps will behave differently, depending on their
toolkit. And it means that improvements, such as latency and signal
processing work, are diluted across all the toolkits - bad for the user.
Agreed. My understanding is that Qt supports using native engines when
available, and possibly an internal engine as a fallback (is that
correct?). We should encourage the other toolkits to adopt a similar policy.
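For what it's worth, here is roughly how an application sees gestures
through Qt's API (Qt 4.6 and later) as I understand it; whether the events
come from a platform recognizer or Qt's built-in one is hidden behind the
same interface, which is exactly the kind of policy I'd like the other
toolkits to follow. Treat this as a sketch from my reading of the Qt docs,
not a statement about how uTouch will plug in:

    #include <QWidget>
    #include <QGestureEvent>
    #include <QPinchGesture>
    #include <QDebug>

    class CanvasWidget : public QWidget {
    public:
        explicit CanvasWidget(QWidget *parent = 0) : QWidget(parent) {
            // Ask the toolkit for pinch gestures; Qt decides whether a
            // platform recognizer or its internal one produces them.
            grabGesture(Qt::PinchGesture);
        }

    protected:
        bool event(QEvent *e) {
            if (e->type() == QEvent::Gesture) {
                QGestureEvent *ge = static_cast<QGestureEvent *>(e);
                if (QGesture *g = ge->gesture(Qt::PinchGesture)) {
                    QPinchGesture *pinch = static_cast<QPinchGesture *>(g);
                    // A CAD view would scale itself by this factor here.
                    qDebug() << "pinch scale" << pinch->scaleFactor();
                    ge->accept(g);
                    return true;
                }
            }
            return QWidget::event(e);
        }
    };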
That's quite different to the idea that a CAD app might invest in some
highly specialised and unique gesture processing. As soon as it's
"competing toolkit engines" you're in a world of user pain.
If an ideal engine is developed, would there be any resistance from
complex applications for non-technical reasons? Would commercial
applications be afraid of leaking proprietary IP by engaging a
centralized engine?
We've seen this before, and since this is a new area we can avoid it. We
ship too many spelling checkers already :)
And yet we all still make spelling mistacks.
We can be very clear about this: Ubuntu won't support
multiple simultaneous competing gesture engines.
How does this work out if an application decides to interpret raw touch data
into gestures by itself? That would be a competing gesture engine then.
If the use is defensible, encourage it; if it's not, patch it out or
deprecate the app.
Mark
In the short run, we can't offer an ideal engine. It would be draconian
to limit the functionality of third-party applications to protect our
vision. Permitting applications to request straight-up MT coordinate
data (as a gesture or by other means) is a way to remain flexible and
lower the barriers to adoption.
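As a rough illustration of what "straight up MT coordinate data" means at
the lowest level, here is a sketch that reads the kernel's evdev
multi-touch events directly (MT protocol type A; the device node below is
only an example). A real application would of course receive this through
the server/uTouch stack rather than opening the device node itself:

    /* Sketch only: dump raw multi-touch coordinates from an evdev node. */
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>
    #include <linux/input.h>

    int main(int argc, char **argv) {
        // The device node varies per machine; pass it as an argument.
        const char *dev = (argc > 1) ? argv[1] : "/dev/input/event4";
        int fd = open(dev, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct input_event ev;
        while (read(fd, &ev, sizeof ev) == (ssize_t)sizeof ev) {
            if (ev.type == EV_ABS && ev.code == ABS_MT_POSITION_X)
                printf("x=%d ", ev.value);
            else if (ev.type == EV_ABS && ev.code == ABS_MT_POSITION_Y)
                printf("y=%d ", ev.value);
            else if (ev.type == EV_SYN && ev.code == SYN_MT_REPORT)
                printf("| ");              /* end of one contact */
            else if (ev.type == EV_SYN && ev.code == SYN_REPORT)
                printf("\n");              /* end of one frame of contacts */
        }
        close(fd);
        return 0;
    }

With newer "type B" (slot-based) devices you would also see ABS_MT_SLOT and
ABS_MT_TRACKING_ID events, but the idea is the same.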
As you pointed out, you can decide after the fact whether the app is
being clever or stupid. Published guidelines might help prevent such
conflicts, and might even help app developers realize they should use or
improve existing engines instead of starting from scratch.
(More carrot, less stick.)
I know it's unlikely, but there is a chance some of those application
developers might even come around and contribute some of that
third-party code to aid the rest of the community.
Rafi
_______________________________________________
Mailing list: https://launchpad.net/~multi-touch-dev
Post to : multi-touch-dev@lists.launchpad.net
Unsubscribe : https://launchpad.net/~multi-touch-dev
More help : https://help.launchpad.net/ListHelp