Hi,

On Mon, Mar 01, 2010 at 04:25:48PM +0300, Artem Ananiev wrote:
> On 3/1/2010 3:41 PM, Daniel Stone wrote:
>> On Mon, Mar 01, 2010 at 12:42:40PM +0100, Bradley T. Hughes wrote:
>>> On 03/01/2010 12:22 PM, ext Daniel Stone wrote:
>>>> and so on, and so forth ... would this be useful enough to let you
>>>> take multi-device rather than some unpredictable hybrid?
>>>
>>> It would for me, absolutely. This avoids the multi-device grab
>>> problem described by Peter earlier, but I'm unsure how well it works
>>> given that we still lack the user/gesture context (as described by
>>> Peter).
>>
>> Any suggestions? :) Reference to how OS X and/or Windows implement it
>> would be welcome too.
>
> In a few words, Windows expects all subsequent touch events to occur
> on the same window as the first touch. If I press another window, the
> corresponding WM_TOUCH notification is just skipped - not sent to the
> client. I'm not sure about explicit mouse grab (SetCapture() call),
> though.
>
> Gestures and touch events are mutually exclusive on Windows: one can
> receive either WM_GESTURE or WM_TOUCH messages, but not both. In the
> latter case, I can feed the gestures engine manually, but again, if I
> don't receive touch events for other windows, I can't make the
> gestures engine recognize gestures for different windows.
>
> In other words, Windows doesn't bother about user/gesture context at
> all. If a client needs some complex manipulation (e.g. multiple users
> interacting with a large touch table), the native system doesn't
> provide any help for that - you just listen to the low-level touch
> events and write your own custom gesture recognizer.
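For clarity, the dispatch policy Artem describes can be modeled with a small sketch. This is a toy model only, not actual Win32 code; the types and names (`touch_session`, `deliver_touch`, integer window ids) are hypothetical illustrations of the first-touch-window rule:

```c
#include <stddef.h>

/* Toy model of the policy described above: touch events are delivered
 * only for contacts on the window that received the first touch;
 * contacts on any other window are silently dropped. Hypothetical
 * types/names, not the Win32 API. */

typedef struct {
    int first_touch_window; /* -1 until the first contact arrives */
} touch_session;

/* Returns 1 if a touch on window_id would be delivered, 0 if the
 * notification is skipped, per the first-window rule. */
int deliver_touch(touch_session *s, int window_id)
{
    if (s->first_touch_window == -1) {
        s->first_touch_window = window_id; /* first contact claims the session */
        return 1;
    }
    return s->first_touch_window == window_id;
}
```

So a second finger on a different window never reaches its client at all, which is why per-window gesture recognition is impossible there without doing your own global tracking.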
OK, thanks for that. :) So it's basically what you'd have now if you
had a driver which did multitouch as multiple axes in a single device.

Cheers,
Daniel
_______________________________________________
xorg-devel mailing list
[email protected]
http://lists.x.org/mailman/listinfo/xorg-devel
