something.
There's no need to apologize, is there? Discussions in the open like this
are done for exactly this reason, to invite input from others.
--
Bradley T. Hughes (Nokia-D-Qt/Oslo), bradley.hughes at nokia.com
Sandakervn. 116, P.O. Box 4332 Nydalen, 0402 Oslo, Norway
On 03/01/2010 01:41 PM, ext Daniel Stone wrote:
On Mon, Mar 01, 2010 at 12:42:40PM +0100, Bradley T. Hughes wrote:
On 03/01/2010 12:22 PM, ext Daniel Stone wrote:
and so on, and so forth ... would this be useful enough to let you take
multi-device rather than some unpredictable hybrid
bringing up a terminal and/or code editor just for
themselves to try out an idea that they get while in a meeting).
(Sorry for the lack of diagrams, my ascii-art kung-fu is non-existent. How
about a video? http://vimeo.com/4990545)
On 03/01/2010 03:34 PM, ext Daniel Stone wrote:
Hi,
On Mon, Mar 01, 2010 at 02:56:57PM +0100, Bradley T. Hughes wrote:
On 03/01/2010 01:55 PM, ext Daniel Stone wrote:
I don't really see the conceptual difference between multiple devices
and multiple axes on a single device beyond the ability
, which actually does give on-screen locations (iirc).
The multi-foci discussions really only apply to multi-touch-capable touch
screens, not to laptop trackpads or external multi-touch-capable tablets
like some (all?) of the Wacom Bamboo tablets.
On 03/01/2010 03:34 PM, ext Daniel Stone wrote:
On Mon, Mar 01, 2010 at 02:56:57PM +0100, Bradley T. Hughes wrote:
This is where the context confusion comes in. How do we know what the
user(s) is/are trying to do solely based on a set of x/y/z/w/h
coordinates? In some cases, a single device with multiple axes
to grab the entire
multi-touch device? hm...
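Just to make the context confusion concrete, the per-contact data is
roughly no more than this (an illustrative struct, not any existing
protocol or driver definition):

    struct Contact {
        int    id;    // tracking id, stable while the contact stays down
        double x, y;  // position on the surface
        double z;     // pressure, if the hardware reports it
        double w, h;  // size of the contact area
    };
    // Nothing in here says which user the contact belongs to, which
    // gesture it is part of, or which window it was aimed at.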
Oh, and did I mention that we have to be compatible with the core protocol
grab semantics for mouse emulation?
You did now :)
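For reference, the kind of core pointer grab that mouse emulation has to
keep honouring looks roughly like this from a legacy client's side (a
minimal Xlib sketch, not code from this thread):

    #include <X11/Xlib.h>

    int main()
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
            return 1;

        Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                         0, 0, 200, 200, 0, 0, 0);
        XMapWindow(dpy, win);
        XFlush(dpy);

        /* While this grab is active, all core pointer events are
         * reported to this client alone -- whatever the touch
         * emulation does, a grabbing client like this must keep
         * working. */
        XGrabPointer(dpy, win, False,
                     ButtonPressMask | ButtonReleaseMask | PointerMotionMask,
                     GrabModeAsync, GrabModeAsync, None, None, CurrentTime);

        XEvent ev;
        do {
            XNextEvent(dpy, &ev);
        } while (ev.type != ButtonRelease);

        XUngrabPointer(dpy, CurrentTime);
        XCloseDisplay(dpy);
        return 0;
    }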
the
multiple-user and multiple-client use case the best.
This is the main roadblock at the moment, and any time I try to come up with
a working solution I hit a wall with even quite basic use cases.
Would you mind elaborating a bit on this?
, which is supposed to be both a
touchscreen and a tablet. The device isn't working 100% in Linux yet, but it
should be possible to (eventually) detect events coming from the pen vs.
from a finger. But again, how would one pass this to the client?
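On the toolkit side, Qt 4.6 already hands the two kinds of input to the
client as different event types, so a client widget can tell them apart
along these lines (a sketch only; how the X server/driver would deliver
that distinction is exactly the open question):

    #include <QtGui>

    class InputWidget : public QWidget
    {
    public:
        InputWidget()
        {
            // Opt in to real touch events instead of mouse emulation.
            setAttribute(Qt::WA_AcceptTouchEvents);
        }

    protected:
        bool event(QEvent *e)
        {
            switch (e->type()) {
            case QEvent::TabletPress:
            case QEvent::TabletMove:
            case QEvent::TabletRelease: {
                // Pen input arrives as tablet events.
                QTabletEvent *pen = static_cast<QTabletEvent *>(e);
                qDebug() << "pen" << pen->pos() << "pressure" << pen->pressure();
                return true;
            }
            case QEvent::TouchBegin:
            case QEvent::TouchUpdate:
            case QEvent::TouchEnd: {
                // Finger input arrives as touch events, one point per contact.
                QTouchEvent *touch = static_cast<QTouchEvent *>(e);
                foreach (const QTouchEvent::TouchPoint &p, touch->touchPoints())
                    qDebug() << "finger" << p.id() << p.pos();
                return true;
            }
            default:
                return QWidget::event(e);
            }
        }
    };

    int main(int argc, char **argv)
    {
        QApplication app(argc, argv);
        InputWidget w;
        w.show();
        return app.exec();
    }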
On 01/19/2010 10:37 AM, ext Peter Hutterer wrote:
On Tue, Jan 19, 2010 at 10:04:53AM +0100, Bradley T. Hughes wrote:
On 01/19/2010 09:43 AM, ext Peter Hutterer wrote:
given the easy case of a single user interacting with a surface:
with Qt as it is now, if you get all the coordinates
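For the single-user case that is already enough to do useful things with:
for example, a pinch-zoom factor can be computed from two touch points in
one event. An illustrative sketch (assuming exactly two contacts):

    #include <QtGui>

    class PinchWidget : public QWidget
    {
    public:
        PinchWidget() { setAttribute(Qt::WA_AcceptTouchEvents); }

    protected:
        bool event(QEvent *e)
        {
            switch (e->type()) {
            case QEvent::TouchBegin:
                // Accepting TouchBegin is required to keep getting updates.
                return true;
            case QEvent::TouchUpdate: {
                QTouchEvent *touch = static_cast<QTouchEvent *>(e);
                QList<QTouchEvent::TouchPoint> pts = touch->touchPoints();
                if (pts.count() == 2) {
                    // Compare the current finger distance with the distance
                    // at the start of the gesture to get a zoom factor.
                    QLineF now(pts.at(0).pos(), pts.at(1).pos());
                    QLineF start(pts.at(0).startPos(), pts.at(1).startPos());
                    if (start.length() > 0)
                        qDebug() << "pinch scale" << now.length() / start.length();
                }
                return true;
            }
            default:
                return QWidget::event(e);
            }
        }
    };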
On 01/18/2010 04:33 AM, ext Peter Hutterer wrote:
On Thu, Jan 14, 2010 at 11:23:23AM +0100, Bradley T. Hughes wrote:
On 01/12/2010 12:03 PM, ext Peter Hutterer wrote:
So, first question: is my behavior the right one? (i.e., not being
compliant with Windows or MacOS)
Short answer - no. Long answer
changes. In particular, being able to do collaborative
work on a large multi-touch surface requires the driver (or server, not sure
which makes most sense) to be able to somehow split the touch points between
multiple windows. This is something that we had to do in Qt at least.
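Roughly, the splitting means grouping the contacts by the window under each
one and delivering each group on its own. A simplified sketch of the idea
(not Qt's actual implementation; deliverTouchEventTo() is a hypothetical
helper):

    #include <QtGui>

    struct Contact {
        int     id;        // tracking id of the contact
        QPointF screenPos; // position in screen coordinates
    };

    // Group the contacts by the top-level window under each of them, so
    // that two people working in different windows each get only their
    // own touch points.
    QHash<QWidget *, QList<Contact> > splitByWindow(const QList<Contact> &contacts)
    {
        QHash<QWidget *, QList<Contact> > perWindow;
        foreach (const Contact &c, contacts) {
            QWidget *target = QApplication::topLevelAt(c.screenPos.toPoint());
            if (target)
                perWindow[target].append(c);
            // Contacts that hit no window are simply dropped in this sketch.
        }
        return perWindow;
    }

    // The caller would then send one touch event per window, e.g.:
    //   foreach (QWidget *w, splitByWindow(all).keys())
    //       deliverTouchEventTo(w, ...);   // hypothetical helper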
an interesting discussion here.
Agreed :P