On Wed, Jul 16, 2014 at 3:25 AM, Daniel Freedman <[email protected]> wrote:
> Hey Tom, I'm not sure I understand all of your question.
>
> On Mon, Jul 14, 2014 at 2:51 AM, Tom Wiltzius <[email protected]> wrote:
>
>> I have a naive question about input events -- to what extent is the Polymer
>> input events library
>> <http://www.polymer-project.org/docs/polymer/touch.html> meant to
>> obviate gesture disambiguation in script?
>>
>> Specifically, if I have an element that I'm interested in taps and swipes
>> on, I can listen to both the tap and the trackstart/track/trackend events
>> with Polymer's gesture event library. But the tap gesture fires after the
>> touch up whether or not the user tracked in the middle.
>
> This is because we made tap have this behavior. A tap will always fire on
> the deepest part of the DOM that contains both the start and the end
> position. You can disable this by calling "preventTap" on any gesture
> event, like track.

Got it, that makes sense.

>> It isn't too painful to stop listening for other events when a track
>> begins, but in an ideal world we'd lean on the browser to do disambiguation
>> of all the gestures that the gesture library exposes -- it needs to do them
>> anyway for browser UI (like long press and scrolling), and this provides
>> much neater encapsulation of behavior for components (the various gesture
>> handlers don't need to know about one another).
>
> This is the part I'm confused about. Do you mean to suggest that the
> gesture library should be subsumed by the platform, or that there should be
> some sort of exposed low-level hooks that a library could use to make a set
> of gestures?

Either one would be fine, but in an ideal world neither the web application nor the framework (i.e. Polymer) would need to register for JS events that it doesn't actually need (from a really simple efficiency standpoint). Folks on input-dev can correct my understanding if it's wrong, but there's already a first-class notion of a "tap" gesture in Chrome.
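[Editor's note: the preventTap contract described above can be sketched roughly as follows. This is a hypothetical, simplified recognizer to illustrate the flow only -- the createRecognizer helper and its API are invented for illustration and are not Polymer's actual implementation.]

```javascript
// Minimal sketch (NOT Polymer's implementation) of the preventTap contract:
// track events fire as the pointer moves, and calling preventTap() on any
// gesture event suppresses the tap that would otherwise fire on pointer up.
function createRecognizer(handlers) {
  var tapPrevented = false;

  function fire(type, detail) {
    var event = {
      type: type,
      detail: detail,
      // Any gesture handler may veto the trailing tap.
      preventTap: function () { tapPrevented = true; }
    };
    if (handlers[type]) handlers[type](event);
  }

  return {
    // Pointer down starts a fresh gesture; the tap veto is reset.
    down: function () { tapPrevented = false; fire('trackstart', {}); },
    // Pointer movement is reported as track events.
    move: function (dx, dy) { fire('track', { dx: dx, dy: dy }); },
    // On pointer up, tap fires only if no handler called preventTap().
    up: function () {
      fire('trackend', {});
      if (!tapPrevented) fire('tap', {});
    }
  };
}
```

A track handler would typically call preventTap() once movement exceeds some slop threshold, so a drag never ends in a spurious tap.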
I don't think we expose it, but it's there (eating up resources), so it would be really nice to take advantage of it. There's also a gesture recognizer in the browser that decides whether you're e.g. scrolling vs. tapping. It seems a real shame to have this logic run every time the user touches the screen, only to have all of it re-done (potentially with slight differences in behavior) in JS.

>> Do we have a path to making that happen?

---
You received this message because you are subscribed to the Google Groups "Polymer" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/polymer-dev/CAK-G-KXCyaRgyOphaPAzGMqVHWj4dgXVsoZ559c-WbJtKspOzQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.
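[Editor's note: for concreteness, the scroll-vs-tap decision being duplicated in JS looks roughly like the sketch below. The slop radius and long-press threshold are illustrative values chosen for this example, not the ones Chrome's internal recognizer actually uses.]

```javascript
// Sketch of the disambiguation logic that JS gesture libraries end up
// re-implementing, even though the browser already runs an equivalent
// recognizer internally for its own UI.
function classifyGesture(start, end, durationMs) {
  var SLOP_PX = 8;         // illustrative movement tolerance before a touch stops being a tap
  var LONG_PRESS_MS = 500; // illustrative long-press threshold
  var dx = end.x - start.x;
  var dy = end.y - start.y;
  var moved = Math.sqrt(dx * dx + dy * dy) > SLOP_PX;
  if (moved) return 'track';                 // drag/scroll-like gesture
  if (durationMs >= LONG_PRESS_MS) return 'hold'; // stationary long press
  return 'tap';                              // short, stationary touch
}
```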
