(My original message didn't get through to openjfx-dev because I used inline images. I've replaced those images with external links)

On Nov 11, 2013, at 11:30 PM, Pavel Safrata <pavel.safr...@oracle.com> wrote:

On 11.11.2013 17:49, Tomas Mikula wrote:
On Mon, Nov 11, 2013 at 1:28 PM, Philipp Dörfler <phdoerf...@gmail.com> wrote:
I see the need to be aware of the area that is covered by fingers rather
than just considering that area's center point.
I'd guess that this adds a new layer of complexity, though. For instance:
Say we have a button on some background, and both the background and the
button have an onClick listener attached. If you tap the button in a way that the touched area's center point is outside of the button's boundaries - what event will be fired? Will both the background and the button receive a click event? Or just either the background or the button exclusively? Will there be a new event type which gets fired in case of such area-based taps?

My suggestion would therefore be to have an additional area tap event which
gives precise information about the diameter and center of the tap. Besides
that, there should be some kind of "priority" for choosing which node's
onClick will be called.
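
To make that concrete, such an event might look roughly like the sketch below. Everything here is hypothetical - TouchAreaEvent and its fields are made up for illustration, nothing like it exists in JavaFX today:

    import javafx.event.Event;
    import javafx.event.EventType;

    // Hypothetical area-tap event carrying the touched area, not just a point.
    public class TouchAreaEvent extends Event {

        public static final EventType<TouchAreaEvent> AREA_TAPPED =
                new EventType<TouchAreaEvent>(Event.ANY, "AREA_TAPPED");

        private final double centerX;   // center of the touched area
        private final double centerY;
        private final double diameter;  // approximate diameter of the contact

        public TouchAreaEvent(double centerX, double centerY, double diameter) {
            super(AREA_TAPPED);
            this.centerX = centerX;
            this.centerY = centerY;
            this.diameter = diameter;
        }

        public double getCenterX()  { return centerX; }
        public double getCenterY()  { return centerY; }
        public double getDiameter() { return diameter; }
    }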
What about picking the one that is closest to the center of the touch?


There is always something directly on the center of the touch (possibly the scene background, but it can have event handlers too). That's what we pick right now.
Pavel

What Seeon, Assaf and I discussed earlier was building some fuzziness into the node picker so that instead of each node capturing only events directly on top of it:

Non-fuzzy picker: http://i.imgur.com/uszql8V.png

..nodes at each level of the hierarchy would capture events beyond their borders as well:

Fuzzy picker: http://i.imgur.com/ELWamYp.png

In the above, “Parent” would capture touch events within a certain radius around it, as would its children “Child 1” and “Child 2”. Since “Child 1” and “Child 2” are peers, they would have a sharp division between them, a watershed on either side of which events would go to one child node or the other. This would also apply if the peer nodes were further apart; they would divide the no-man’s land between them. Of course this no-man’s land would be part of “Parent” and could be touch-sensitive - but we won’t consider “Parent” as an event target until we have ruled out using one of its children’s extended capture zones.
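
Roughly, in code, the watershed rule could look like this. This is just a sketch: the fixed radius and the distance helper are simplifications, and a real picker would also handle transforms, clips, pickOnBounds and recursion into the winning child:

    import javafx.geometry.Bounds;
    import javafx.geometry.Point2D;
    import javafx.scene.Node;
    import javafx.scene.Parent;

    // Fuzzy pick step: each child gets an extended capture zone of `radius`
    // around its bounds, the closest child wins, and the parent is only
    // targeted when no child's zone contains the point.
    static Node fuzzyPick(Parent parent, Point2D p, double radius) {
        Node best = null;
        double bestDistance = Double.MAX_VALUE;
        for (Node child : parent.getChildrenUnmodifiable()) {
            double d = distanceToBounds(child.getBoundsInParent(), p);
            if (d <= radius && d < bestDistance) {
                bestDistance = d;
                best = child;
            }
        }
        return best != null ? best : parent;
    }

    // Distance from a point (in the parent's coordinates) to a child's
    // bounds; zero when the point is inside the bounds.
    static double distanceToBounds(Bounds b, Point2D p) {
        double dx = Math.max(0, Math.max(b.getMinX() - p.getX(), p.getX() - b.getMaxX()));
        double dy = Math.max(0, Math.max(b.getMinY() - p.getY(), p.getY() - b.getMaxY()));
        return Math.hypot(dx, dy);
    }

The watershed between two peers then simply falls where their distances to the touch point are equal.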

The capture radius could either be a styleable property on the nodes, or could be determined by the X and Y size of a touch point as reported by the touch screen. We’d still be reporting a touch point, not a touch area. The touch target would be, as now, a single node.
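
For the styleable-property flavour, a node could expose something like the following. Again a sketch only: the "-fx-capture-radius" CSS property is made up, and StyleablePropertyFactory is a convenience class that only exists in later JavaFX releases:

    import javafx.css.CssMetaData;
    import javafx.css.Styleable;
    import javafx.css.StyleableProperty;
    import javafx.css.StyleablePropertyFactory;
    import javafx.scene.layout.Region;
    import java.util.List;

    public class FuzzyRegion extends Region {

        private static final StyleablePropertyFactory<FuzzyRegion> FACTORY =
                new StyleablePropertyFactory<>(Region.getClassCssMetaData());

        // Styleable via the hypothetical "-fx-capture-radius" CSS property;
        // defaults to 10px if the touch screen reports no contact size.
        private final StyleableProperty<Number> captureRadius =
                FACTORY.createStyleableNumberProperty(
                        this, "captureRadius", "-fx-capture-radius",
                        r -> r.captureRadius, 10.0);

        public double getCaptureRadius() {
            return captureRadius.getValue().doubleValue();
        }

        @Override
        public List<CssMetaData<? extends Styleable, ?>> getCssMetaData() {
            return FACTORY.getCssMetaData();
        }
    }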

This would get us more reliable touch capture at leaf nodes of the node hierarchy at the expense of making it harder to tap the background. This is likely to be a good trade-off.

Daniel

Tomas

Maybe the draw order / order in the scene graph / z-buffer value might be
sufficient to model what would happen in the real, physical world.
On 11.11.2013 13:05, "Assaf Yavnai" <assaf.yav...@oracle.com> wrote:

The ASCII sketch looked fine on my screen before I sent the mail :( I hope
the idea is clear from the text.
(Now in the reply dialog it also looks good.)

Assaf
On 11/11/2013 12:51 PM, Assaf Yavnai wrote:

Hi Guys,

I hope that I'm right about this, but it seems that touch events in glass are translated (and reported) as single-point events (x & y) without an
area, like pointer events.
AFAIK, the controls respond to touch events the same as to mouse events (using the same pickers), and as a result a button press, for example, will only be
triggered if the x & y of the touch event is within the control area.
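
(Roughly, the current strict check amounts to something like this - a simplified sketch that ignores clips, pickOnBounds and disabled or invisible nodes:)

    import javafx.geometry.Point2D;
    import javafx.scene.Node;

    // Strict hit test: the node is picked only when the single reported
    // coordinate falls inside the node itself, so a finger that overlaps
    // the node but whose center lands outside it misses entirely.
    static boolean strictHit(Node node, double sceneX, double sceneY) {
        Point2D local = node.sceneToLocal(sceneX, sceneY);
        return node.contains(local);
    }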

This means that small controls, or even quite large controls (like
buttons with text), will often get missed because of the 'strict' node picking, although from a UX point of view it is strange, as the user clearly pressed
on a node (the finger was clearly above it) but nothing happens...

With the current implementation it's hard to use small features in controls, like scrollbars in lists, and it's almost impossible to implement something like a 'screen navigator' (the series of small dots at the bottom of a smartphone's
screen which allows you to jump directly to a 'far away' screen).

To illustrate, consider the low-resolution sketch below, where the "+" is the actual x,y reported, the ellipse is the finger touch area and the
rectangle is the node.
With the current implementation this type of tap will not trigger the node's
handlers:

               _____
              /     \
             /   +   \
        ____/_________\_____
       |    \         /     |    in this scenario the 'button'
       |_____\_______/______|    will not get pressed
              \_____/

If your smartphone supports it, turn on the touch debugging options in
settings and see that each point translates to a quite large circle, and
whatever falls in it, or reasonably close to it, gets picked.

I want to start a discussion to understand if my perspective is accurate and to understand what, if anything, can be done for the coming release or the
next one.

We might use the recently opened RT-34136 <https://javafx-jira.kenai.com/browse/RT-34136> for logging this, or open a new JIRA for it.

Thanks,
Assaf
