> >> At this time I would like to receive any issues or bug reports on this
> >> mailing list. We will transition to launchpad for bug reports once the
> >> work is in Ubuntu proper, but due to things still being in flux I think
> >> reports are better here for now.
> > 
> > There is an important difference in the handling of prolonged gestures
> > in this environment, which seems to imply that not all basic
> > application gesture needs can be met with this approach.
> > 
> > With the maverick X drivers (synaptics and multitouch), it is natural
> > to position the cursor with one finger, and then add another finger to
> > scroll a bit, lift one finger to position the cursor again, and so
> > on. The same goes for zooming, where one first positions the cursor,
> > then zooms at that point.
> > 
> > With the new natty XI2.1 packages, the gesture engine releases control
> > over a touch based on a timeout, passing it on to the application in
> > such a way that the modus operandi described above is no longer
> > possible. In practice, one might need to view the server-side
> > recognition as dedicated to system gestures, and instead reimplement
> > the current application behavior on the client side.
> 
> It's not restricted to system gestures, but rather to gestures that
> are clear from the start. For example, you can still have a two-finger
> drag gesture, but you must initiate it immediately; it can't morph
> into a two-finger drag from a one-finger drag.

True.
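
To make the hand-off concrete, here is a toy model of the decision (the
names and the timeout value are invented for illustration; this is not
the actual XI2.1 or gesture-engine interface):

```python
# Toy model of the ownership hand-off (invented names and timeout;
# not the actual XI2.1 or gesture-engine interface).

def touch_owner(recognized_at_ms, timeout_ms=150):
    """Decide who ends up owning a touch sequence.

    A gesture must be clear from the start: if the engine recognizes
    one within the timeout it keeps the touches; otherwise they are
    replayed to the application and can no longer morph into a gesture.
    """
    if recognized_at_ms is not None and recognized_at_ms <= timeout_ms:
        return "gesture engine"
    return "application"

print(touch_owner(80))    # two-finger drag started immediately
print(touch_owner(None))  # one-finger drag that never became a gesture
print(touch_owner(400))   # second finger added too late
```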

> You have been a proponent of trackpad position then scroll then position
> based on the number of fingers. I believe this comes from how OS X does
> things. I think there are a few reasons why OS X can handle this, and
> why it doesn't fit in our system as it is:

Actually, it is not confined to OS X at all; it is common behavior for
anyone with two-finger scrolling enabled on their machine, which
amounts to millions of users.

> 1. We have been of the mindset that a touch stream from X must only end
> when the touch has physically ended. Another way of stating this is that
> any client using the touch stream events must either use the entire
> physical stream or none of the stream.

I think we need to turn this argument upside down - the user is right,
the technology is wrong. ;-)

> 2. OS X performs two finger scroll recognition on the server side. We
> have been of the mind that all multitouch gestures should be handled
> client side.
> 
> Because of 1, we cannot start moving the pointer with a touch stream,
> and then end the touch stream when a second touch begins and we start
> performing a scroll. I believe OS X skirts 1 by checking if the window
> under the cursor wants touch events, and if not it will send scroll
> events as appropriate. In essence, I believe OS X is special casing
> scrolling.
> 
> Maybe we should be special casing scrolling too, but I'm not quite
> convinced. I would rather have the window system be "dumb" and handle
> things like scrolling on the client side. This would prevent lock-in to
> a poor system only fully understood in the future (just like we have
> with scroll buttons in X today). However, it may mean that we lose the
> ability to have position then scroll then position based on the number
> of fingers, as OS X does.

I think special-casing this makes a lot of sense, considering pointer
movement can be clearly identified as input focus modification, which
is conceptually different from touch drags. Mark's suggestion will
most likely work satisfactorily.
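
If scrolling is special-cased, the desired trackpad behavior reduces to
a mode switch driven by the finger count. A minimal client-side sketch
(toy code with invented names, not any real driver or toolkit API):

```python
# Toy client-side tracker (invented names, not a real driver API):
# the interaction mode follows the number of active touches, so
# lifting the second finger resumes pointer positioning at the same
# spot -- the position/scroll/position behavior discussed above.

class TouchModeTracker:
    """Maps the number of active touches to an interaction mode."""

    def __init__(self):
        self.active = set()  # touch ids currently on the pad

    def touch_begin(self, touch_id):
        self.active.add(touch_id)
        return self.mode()

    def touch_end(self, touch_id):
        self.active.discard(touch_id)
        return self.mode()

    def mode(self):
        n = len(self.active)
        if n == 1:
            return "position"  # one finger moves the pointer
        if n == 2:
            return "scroll"    # two fingers scroll at the pointer
        return "idle"

tracker = TouchModeTracker()
print(tracker.touch_begin(1))  # one finger down: position
print(tracker.touch_begin(2))  # second finger down: scroll
print(tracker.touch_end(2))    # lift it again: back to position
```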

Thanks,
Henrik

_______________________________________________
Mailing list: https://launchpad.net/~multi-touch-dev
Post to     : multi-touch-dev@lists.launchpad.net
Unsubscribe : https://launchpad.net/~multi-touch-dev
More help   : https://help.launchpad.net/ListHelp
