On Wed, Sep 28, 2011 at 10:45 AM, Thiago Macieira <[email protected]> wrote:
> Long ago, when we started talking about gestures, I had a dream about having a
> semantic level above the raw events. Touch events, mouse events, keyboard
> events, tablet events, etc., are raw: they convey what action the user did,
> but not what the user meant to do. There should be an interpretation layer
> that is capable of interpreting multiple event sources over time and give a
> resulting semantic. For example, imagine the case of zooming: it can be a key
> press (ZoomIn key), Ctrl+Mouse Button 4 (Ctrl+WheelUp) or a two-finger pinch.
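A minimal sketch of what such an interpretation layer could look like (every
class, enum and signal name below is invented purely for illustration; none of
this is existing Qt API):

// Hypothetical sketch: an object installed as an event filter that watches
// raw events and emits a semantic action such as "zoom in", no matter which
// input source produced it.
#include <QObject>
#include <QEvent>
#include <QKeyEvent>
#include <QWheelEvent>

enum SemanticAction { NoAction, ZoomIn, ZoomOut };   // invented type

class InputInterpreter : public QObject              // invented name
{
    Q_OBJECT
public:
    explicit InputInterpreter(QObject *parent = 0) : QObject(parent) {}

signals:
    void triggered(SemanticAction action);

protected:
    bool eventFilter(QObject *watched, QEvent *event)
    {
        if (event->type() == QEvent::KeyPress) {
            QKeyEvent *ke = static_cast<QKeyEvent *>(event);
            if (ke->key() == Qt::Key_ZoomIn) {        // dedicated ZoomIn key
                emit triggered(ZoomIn);
                return true;
            }
        } else if (event->type() == QEvent::Wheel) {
            QWheelEvent *we = static_cast<QWheelEvent *>(event);
            if (we->modifiers() & Qt::ControlModifier) {   // Ctrl+WheelUp/Down
                emit triggered(we->delta() > 0 ? ZoomIn : ZoomOut);
                return true;
            }
        }
        // A real layer would also interpret QTouchEvent/QGestureEvent
        // sequences over time, e.g. a two-finger pinch.
        return QObject::eventFilter(watched, event);
    }
};

A widget would then connect to triggered() instead of reimplementing the
key/wheel/gesture handling itself.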
I'd like that very much, too. I'm doing something like this already: if you're
on KDE, fire up Palapeli (the jigsaw puzzle game) and look at its settings
dialog. Similar to how KDE apps have a standard dialog for reconfiguring
QAction::shortcut()s, Palapeli has a dialog for reconfiguring which mouse
buttons trigger which actions. (An extra problem here is that multiple actions
may be bound to the same mouse button. The actual action is then chosen
depending on context, e.g. whether the mouse cursor is hovering over the
puzzle table or over a puzzle piece; a rough sketch is in the PS below.)

Greetings
Stefan
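PS: For the context-dependent part, here is a rough sketch of the kind of
lookup I mean (again, all names are invented for illustration; this is not
Palapeli's actual code):

// Hypothetical sketch: a user-configurable mouse-trigger -> action map where
// one button may carry several bindings and the context (what the cursor is
// currently over) picks the one that applies.
#include <QList>
#include <QString>
#include <Qt>

enum InteractionContext { OnPuzzleTable, OnPuzzlePiece };   // invented type

struct MouseTrigger                                         // invented type
{
    Qt::MouseButton button;
    Qt::KeyboardModifiers modifiers;
    InteractionContext context;   // where this binding applies
    QString actionName;           // e.g. "MovePiece" or "ScrollViewport"
};

class TriggerMap                                            // invented name
{
public:
    void addBinding(const MouseTrigger &t) { m_bindings.append(t); }

    // Several bindings may share the same button; the context decides
    // which action is actually triggered.
    QString actionFor(Qt::MouseButton button, Qt::KeyboardModifiers mods,
                      InteractionContext context) const
    {
        foreach (const MouseTrigger &t, m_bindings) {
            if (t.button == button && t.modifiers == mods && t.context == context)
                return t.actionName;
        }
        return QString();   // nothing bound to this combination
    }

private:
    QList<MouseTrigger> m_bindings;
};

The settings dialog then only has to edit the list of bindings, much like the
standard shortcut editor edits QAction::shortcut()s.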
_______________________________________________
Qt5-feedback mailing list
[email protected]
http://lists.qt.nokia.com/mailman/listinfo/qt5-feedback