On 08/19/2010 01:10 AM, Peter Hutterer wrote: [...]
> I don't think the algorithm is what's holding you back anyway, it's the
> nature of gestures and human input in general. Even if your GE is
> instantaneous in the recognition, you may not know for N milliseconds if the
> given input may even translate into a gesture. Example: middle mouse button
> emulation code - you can't solve it without a timeout.

Indeed, finding out the user intent is the real problem. Let me take
this opportunity to clarify some things that have been discussed in the
thread so far.

The overall gesture design handles multi-user gestures in a
multi-pointer environment. Finding out how to group touches into
tentative gesture primitives is the main work done by the gesture
recognizer. We want this process to be as speedy and as context-free as
possible, to ensure consistent handling of gestures across
applications.

Since real gestures depend on context such as application and window
structure, the recognizer passes a set of "alternative futures" to the
gesture instantiator, which knows enough about the actual window
situation to be able to deliver real gesture primitives to real
applications. Having several gestures of the same kind going on
simultaneously at different parts of the screen is not a problem.
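To make the "alternative futures" idea a bit more concrete, here is a
rough sketch of what the recognizer could hand to the instantiator.
This is purely illustrative - the names, types and limits are all made
up, not code from any actual tree:

#include <stdint.h>

#define MAX_TOUCHES      5
#define MAX_ALTERNATIVES 8

enum tentative_type {
    TENTATIVE_TAP,
    TENTATIVE_DRAG,
    TENTATIVE_PINCH,
    TENTATIVE_ROTATE,
};

/* One possible reading of a group of touches. */
struct tentative_gesture {
    enum tentative_type type;
    uint32_t touch_ids[MAX_TOUCHES]; /* touches claimed by this reading */
    int num_touches;
    float confidence;           /* recognizer's belief in this reading */
};

/* The mutually exclusive alternative futures for one touch group. */
struct gesture_alternatives {
    struct tentative_gesture alt[MAX_ALTERNATIVES];
    int num_alternatives;
    uint64_t deadline_ms;       /* after this, the set must be resolved */
};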
As Chase already mentioned, we aim to be as independent of X as
possible. It turns out that the propagation of gesture primitives is so
tightly connected to core event propagation that we feel adding it to
the server is the only sane solution.
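Putting the pieces together, the instantiator could resolve such a set
roughly as below - deliver immediately when the window context leaves
only one reading, otherwise wait for more input or for the timeout
Peter mentions. Again a hypothetical sketch building on the structs
above, with the context and delivery hooks left as externs:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* True if the window under the touches has a handler for this gesture
 * type; supplied by the window-aware layer. */
extern bool context_accepts(const struct tentative_gesture *g);

extern uint64_t now_ms(void);
extern void deliver_gesture(const struct tentative_gesture *g);
extern void replay_as_raw_input(const struct gesture_alternatives *set);

void instantiate(struct gesture_alternatives *set)
{
    const struct tentative_gesture *best = NULL;
    int matches = 0;

    for (int i = 0; i < set->num_alternatives; i++) {
        if (!context_accepts(&set->alt[i]))
            continue;
        matches++;
        if (!best || set->alt[i].confidence > best->confidence)
            best = &set->alt[i];
    }

    if (matches == 1) {
        deliver_gesture(best);     /* unambiguous: deliver at once */
    } else if (matches > 1 && now_ms() < set->deadline_ms) {
        /* still ambiguous: wait for more input or for the timeout */
    } else if (best) {
        deliver_gesture(best);     /* timeout: commit to best guess */
    } else {
        replay_as_raw_input(set);  /* no gesture: pass events through */
    }
}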
Cheers,
Henrik