On 11/10/10 13:48, Mohamed Ikbel Boulabiar wrote:
>
> On Wed, Oct 6, 2010 at 5:26 PM, James Carrington
> <james.carring...@n-trig.com> wrote:
>
>     For example, SpaceClaim Engineer (a multi-touch CAD app on
>     Windows) has dozens, perhaps going on hundreds, of unique
>     gestures it recognizes.  They also use combinations of pen &
>     touch in innovative ways, which motivates them to want raw HID
>     data from both touch and pen.
>
>
> How can we get to hundreds of gestures without the ability to factor
> them into known sub-gestures such as drag/pinch/rotate/...?
> The engine may, for example, be tuned to recognize gestures occurring
> in sub-areas of the screen (as in the video: 1-finger hold + 2-finger
> drag). On a big screen like the 3M one we can have more than one user
> (20 fingers), so recognizing gestures by area simplifies that
> handling (for multi-user and meta-gestures).
>
> Anything beyond that will be very context-specific, and only a
> minority of applications needs it.
>
> A "Grammar of gestures" defined by combination of sub-known-gestures
> in space (areas) and in time (continuation/succession/cascading)
> simplifies life than having to deal with too many gestures.
>
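
To make the quoted grammar idea concrete, here is a minimal, purely
hypothetical sketch of composing known primitives in space (areas) and
time (succession). All names are invented for illustration; this is
not a real utouch/GEIS API:

    from dataclasses import dataclass

    # Hypothetical sketch only: these types are invented, not a real
    # utouch/GEIS API.

    @dataclass(frozen=True)
    class Primitive:
        kind: str      # a known sub-gesture: "hold", "drag", "pinch", ...
        fingers: int   # how many touch points it involves
        area: str      # named screen region, e.g. a per-user sub-area

    @dataclass(frozen=True)
    class Sequence:
        steps: tuple   # primitives that must occur in this order
        max_gap: float # max seconds allowed between consecutive steps

    # The example from the video: a 1-finger hold in one area followed
    # by a 2-finger drag in another, within half a second.
    hold_then_drag = Sequence(
        steps=(Primitive("hold", fingers=1, area="left-panel"),
               Primitive("drag", fingers=2, area="canvas")),
        max_gap=0.5,
    )

Constraining each primitive to an area is what keeps 20 fingers on a
big screen tractable: each user's region gets its own small grammar
instead of one recognizer for the whole surface.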

That's true, and it's the direction we're going in. But I think the
point about specialist apps is deeper, and we'll need to support it.
We'll provide a mechanism whereby the user can say to the system,
"don't try to guess what I'm saying in this window; I'm using a
different language".
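
As a rough sketch of how such a per-window opt-out could behave (all
names here are hypothetical, invented for illustration rather than
drawn from any real utouch/GEIS interface):

    # Hypothetical per-window recognition policy; invented names only.
    class GestureEngine:
        def __init__(self):
            self.policy = {}  # window id -> "system" or "raw"

        def set_policy(self, window_id, policy):
            self.policy[window_id] = policy

        def deliver(self, window_id, touch_events):
            # Opted-out windows get the raw touch frames so the app
            # can run its own recognizer; everyone else gets the
            # system gesture language.
            if self.policy.get(window_id, "system") == "raw":
                return touch_events
            return self.recognize(touch_events)

        def recognize(self, touch_events):
            # Stand-in for the system recognizer: classify by the
            # number of distinct touch points.
            n = len({e["touch_id"] for e in touch_events})
            return [{"gesture": {1: "drag", 2: "pinch"}.get(n, "unknown")}]

    engine = GestureEngine()
    engine.set_policy("cad-canvas", "raw")  # e.g. a SpaceClaim-style app
    frames = [{"touch_id": 0, "x": 10, "y": 20}]
    assert engine.deliver("cad-canvas", frames) == frames
    print(engine.deliver("text-editor", frames))  # [{'gesture': 'drag'}]

The point is only the routing: the system owns one shared gesture
language, and a window that declares a different language receives its
events untouched.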

But all "general" toolkits will use the system gesture language, and
apps which just use the toolkit can thus all be expected to be coherent.

Mark
