See RT-34945 <https://javafx-jira.kenai.com/browse/RT-34945> for a good
example of a case where touch and mouse events should behave differently.
On 12/15/2013 05:43 PM, Assaf Yavnai wrote:
I will summarize my answers here rather than inline; I hope I touch on
all your points.
Let me start my answer with a different perspective.
I think that it is correct to try to make mouse-oriented applications
work under touch screens, but it's not the same as looking at how UI
should behave with touch screens.
To explain myself, here is an example tested on the iPhone. It doesn't
mean that we need to do the same, but rather it is a pointer for checking
how UI can behave under touch.
On the iPhone, when you press a control, say a send button, and you start
dragging away from it, the button remains 'armed' for quite a large
distance, even if you drag on top of other controls. Only after a
large distance has been passed is the button deactivated and the
operation canceled; this is true across the platform.
What I'm trying to say is that we first need to define how we would like
touch to behave from a UI/UX point of view, and only then try to resolve
the technical issues.
Having said that, it's very important to note that JFX is cross-platform
and not device-specific the way smartphones are, for example. So I think that:
- mouse events should act as mouse events (regardless of touch)
- touch events should act as touch events (regardless of mouse)
- mouse applications that run on a touch device should have 80/20
functionality working, i.e. usable but not perfect (again, the way the
application behaves and was designed cannot overlap 100%, e.g. small
controls that are hard to hit with a finger)
- an open question is how we want touch applications to work on a mouse
platform (migrating an embedded application to desktop, for example)
UI should behave differently on touch platforms and mouse platforms, or
more accurately when events derive from a touch device or from a pointer
device. And this (mainly) is currently not supported.
I would like to suggest the following, hoping it's feasible for the
next release (8u20 or 9):
1) define UI/UX requirements for touch devices
2) check the overlap and the unique behaviors between mouse and touch
3) suggest three UI paths: 1) mouse-based UI, 2) touch-based UI, 3) common UI
4) discuss and define technical approach
We might end up with a solution very similar to what we have now or, as
you said, something completely new. The challenge is to come to it
with 'empty minds' (that is why it would be best if a UX engineer
defined it and not us).
Furthermore, I think that solutions like "Glass generates a
MOUSE_EXITED event any time all touches are released" should be
implemented in the shared code at a non-platform-specific point, for
the obvious reasons this example provides.
I apologize if it was hinted that the technical challenges are not
important or not challenging; I meant that it isn't the time to tackle
them yet (top-down approach vs. bottom-up).
I believe that we both want to deliver a top-notch platform and not
one that works most of the time; the difference between us, I think,
is that I focus on the differences and you on the commonalities.
Maybe it would be best if we assemble a team to discuss those issues
by phone instead of by mail. What do you think?
Hope it's clearer.
On 12/12/2013 10:30 PM, Pavel Safrata wrote:
please see my comments inline.
On 20.11.2013 16:30, Assaf Yavnai wrote:
I think that this is a very good example of why touch events should be
processed separately from mouse events.
For example, if you press a button with a touch it will remain
in the "hover" state even though you released your finger from the screen.
This happens because the "hover" state listens to the mouse
coordinates, which are invalid for touch.
Touch events don't have the concept of move, only drag.
My initial feeling would be for the synthesized mouse events to
behave similarly to touch events - when touching the screen you want
the application to respond a certain way regardless of the events used. Do
you agree?
Specifically, the hover problem can be solved quite easily. On iOS,
Glass generates a MOUSE_EXITED event any time all touches are
released, which clears the hover state of everything. I suppose all
platforms can do that (for synthesized mouse events).
As I mentioned before, from my point of view the goal of this
thread is to understand and map the differences and the expected behavior
between touch events and mouse events, and to have separate behavior
depending on the type of event (Nodes, Controls and application
level). One of them is the picking mechanism; hover is another.
I think the discussion of how to implement it is currently of lower
priority, as it is only a technical detail.
OK. Regarding picking, I believe the logical algorithm "pick each
pixel and find the touch-sensitive one closest to the center if
there's any" is the most natural behavior (just let me note that the
implementation is not really a "technical detail", because right now I
don't see any way to implement it, so we might need to do something
different). I already covered hover state. Another major thing on my
mind is that scrollable content needs to become pannable (right now
an attempt to scroll a list by panning in FX results in selecting an
item, or even expanding a tree item; this is just not acceptable).
Also I think that the synthesized mouse events should only be used for
making applications that don't listen to touch events usable (in
most cases), but we can't expect them to work 1:1 (touch using a
finger is a very different device than a mouse or stylus).
Touch applications are supposed to be 'tailor made' for touch; this
includes a different UI layout, different UX and of course different
logic (because the events and their meaning are different than with a
mouse). Currently there is no reason for an application to listen to
touch events unless you need to support multi-touch, as the events
otherwise carry the same information as the synthesized mouse events.
The touch events were added exactly to support multi-touch. The
"look" is one thing - yes, you'd often use different skins. But I
believe the behavior of the events can be made such that a majority
of applications can just use mouse events and their application will
"behave" well with single-touch out of the box, which can save a lot
of development effort on the user's side.
As the mouse events are synthesized in the native layer, I think it's
important to know the origin of the event. Maybe the handlers should
check if the event is synthetic and treat it as a touch event in that
case. We can also double-check it by checking whether there is currently
a touch device connected, for example.
I agree, I think many handlers will need to take the isSynthesized()
flag into account.
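For example, an application handler can already branch on that flag
today (an illustrative snippet; the two println calls just stand in for
whatever the application would actually do):

import javafx.scene.control.Button;
import javafx.scene.input.MouseEvent;

// Illustrative only: branch on isSynthesized() so touch-originated
// "mouse" clicks can be treated differently from real mouse clicks.
final class SynthesizedAwareHandlers {
    static void wireSendButton(Button send) {
        send.addEventHandler(MouseEvent.MOUSE_CLICKED, e -> {
            if (e.isSynthesized()) {
                // Synthesized from a touch: hover-based feedback is unreliable,
                // so a handler may want a touch-style response here.
                System.out.println("tap (synthesized from touch)");
            } else {
                System.out.println("real mouse click");
            }
        });
    }
}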
On 11/19/2013 04:34 PM, Pavel Safrata wrote:
there is more to it than just listeners. For instance, every node
has its "hover" and "pressed" states that are maintained based on
the picking results and used by CSS. So I believe we can't ignore
anything during picking.
By the way, I suppose this fuzzy picking should happen also for
synthesized mouse events so that all the classic apps and standard
controls can benefit. If I'm right, we definitely can't restrict it
to touch-listening nodes.
On 18.11.2013 14:10, Assaf Yavnai wrote:
I have a question,
Would it be possible to search only the nodes that have been
registered for touch event notifications instead of the entire
tree? (if that's not already being done)
Of course, if one chooses to register the root node as the listener,
we will have to go over all the nodes, but that seems like bad
practice and I think it's OK to have a performance hit in that case.
On 11/17/2013 04:09 PM, Daniel Blaukopf wrote:
I think we do use CSS to configure feel as well as look - and
this is feel, not look - but I also don’t feel strongly about
whether this needs to be in CSS.
I like your idea of simply picking the closest touch sensitive
node that is within range. That puts the burden on the touch
event to describe what region it covers. On the touch screens we
are currently looking at, a region would be defined as an oval -
a combination of centre point, X diameter and Y diameter. However
the touch region can be any shape, so it might need to be
represented as a Path.
Iterating over pixels just isn’t going to work though. If we have
a 300dpi display the touch region could be 150 pixels across and
have an area of nearly 18000 pixels. Instead we’d want a way to
ask a parent node, “In your node hierarchy, which of your nodes’
borders is closest to this region”. So we’d need to come up with
an efficient algorithm to answer this question. We’d only ask
this question for nodes with extended capture zone.
We could reasonably limit the algorithm to dealing with convex
shapes. Then we can consider an imaginary line L from the node
center point to the touch center point. The intersection of L
with the node perimeter is the closest point of contact. If this
point is also within the touch area then we have a potential
match. We iterate over all nearby nodes with extended capture
zone in order to find the best match.
This will then be O(n) in both time and space for n nearby nodes,
given constant time to find the intersection of L with the node
perimeter. This assumption will be true for rectangular, oval and
rounded rectangle nodes.
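To make that concrete, here is a rough sketch under simplifying
assumptions - axis-aligned rectangular node bounds and an oval touch
region; the list of nearby bounds and the entry point are illustrative
only, not existing JavaFX API:

import javafx.geometry.Point2D;
import javafx.geometry.Rectangle2D;
import java.util.List;

// Sketch of "in a competition between capture zones, pick the node whose
// border is nearest the touch point", assuming rectangular node bounds
// and an oval touch region (centre + X/Y radius).
final class ClosestBorderPicker {

    // Point where the line from the node centre towards the touch centre
    // meets the node's rectangular perimeter (or the touch point itself,
    // if the touch centre is already inside the node).
    static Point2D contactPoint(Rectangle2D b, Point2D touch) {
        double cx = b.getMinX() + b.getWidth() / 2;
        double cy = b.getMinY() + b.getHeight() / 2;
        double dx = touch.getX() - cx, dy = touch.getY() - cy;
        double tx = dx > 0 ? (b.getMaxX() - cx) / dx
                  : dx < 0 ? (b.getMinX() - cx) / dx : Double.POSITIVE_INFINITY;
        double ty = dy > 0 ? (b.getMaxY() - cy) / dy
                  : dy < 0 ? (b.getMinY() - cy) / dy : Double.POSITIVE_INFINITY;
        double t = Math.min(1.0, Math.min(tx, ty));
        return new Point2D(cx + t * dx, cy + t * dy);
    }

    // True if a point lies inside the oval touch region.
    static boolean inTouchOval(Point2D p, Point2D touch, double rx, double ry) {
        double nx = (p.getX() - touch.getX()) / rx;
        double ny = (p.getY() - touch.getY()) / ry;
        return nx * nx + ny * ny <= 1.0;
    }

    // O(n) competition over nearby capture-zone nodes; returns the index of
    // the winner in 'nearbyBounds', or -1 if no border is within the region.
    static int pick(List<Rectangle2D> nearbyBounds, Point2D touch, double rx, double ry) {
        int best = -1;
        double bestDist = Double.POSITIVE_INFINITY;
        for (int i = 0; i < nearbyBounds.size(); i++) {
            Point2D contact = contactPoint(nearbyBounds.get(i), touch);
            double dist = contact.distance(touch);
            if (inTouchOval(contact, touch, rx, ry) && dist < bestDist) {
                best = i;
                bestDist = dist;
            }
        }
        return best;
    }
}

Note that a node strictly containing the touch centre gets a contact
distance of zero here, so strict containment automatically wins the
competition.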
On Nov 15, 2013, at 11:09 PM, Pavel Safrata wrote:
let me start with a few comments.
"changing behavior based on which nodes have listeners on them"
- absolutely not. We have capturing, bubbling, hierarchical
event types, so we can't tell which nodes are listening (in the
extreme case, the scene can handle Event.ANY and perform actions on
the target node based on the event type).
"position does not fall in the boundaries of the node" - I don't
think it will be very harmful. Of course it's possible for users
to write handlers that will be affected, but I don't think
it happens often, it seems quite hard to invent such handler.
The delivery mechanism should be absolutely fine with it, we
have other cases like that (for instance, dragging can be
delivered to a node completely out of mouse position). Of course
picking a 3D node in its capture zone would mean useless
PickResult (texture coordinates etc.)
CSS-accessible vs. property-only - I don't have a strong
opinion. I agree it's rather "feel" than "look"; on the other
hand I think there are such things already (scrollbar policy, for
example).
Now I'll bring another problem to the concept. Take the
situation from Daniel's original picture with two siblings
competing for the capture zones:
Put each of the red children into its own group - they are no
longer siblings, but the competition should still work.
The following may be a little wild, but anyway - have one of the
siblings with a capture zone and the other one without it, the one
without it partly covering the one with it. Wouldn't it be great
if the capture zone was present around the visible part of the
node (reaching over the edge of the upper node)? I think it
would be really intuitive (fuzzy picking of what you see), but
it's getting pretty complicated.
From now on, I'll call a node with an enabled capture zone 'touch sensitive'.
The only algorithm I can think of that would provide great results is this:
- Pick normally at the center. If the picked node is touch
sensitive, return it.
- Otherwise, run picking for each pixel in the touch area, find
the closest one belonging to a touch sensitive node and return
that node (if there is none, then of course return the node picked
at the center).
Obviously we can hardly do so many picking rounds. But it can be optimized:
- Perform the area picking in one pass, filling an array -
representing pixels - with the nodes picked on them
- Descend only when bounds intersect with the picking area
- Don't look farther from the center than the already found best match
- Don't look at pixels with an already picked node
- For many nodes (rectangular, circular, with pickOnBounds
etc.), instead of testing containment many times, we can quickly
tell the intersection with the picking area
- Perhaps also checking only every nth pixel would be sufficient
This algorithm should be reasonably easy to code and very robust
(not suffering from various node-arrangement corner cases), but
I'm still not sure about the performance (it depends mostly on
the capture zone size - 30-pixel zones may result in calling
contains() nearly a thousand times, which might kill it). But
perhaps (hopefully) it can be perfected. Right now I can't see
any other algorithm that would work well and would result in a
more efficient implementation (the search for overlapping nodes
and closest borders etc. is going to be pretty complicated as
well, if it's even possible to make it work).
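To be a bit more concrete, the fallback pass could look roughly like
this (a sketch only: the per-coordinate pick() hook and the
isTouchSensitive test are hypothetical stand-ins for internal machinery,
and only the "nearest first" and "every nth pixel" optimizations are shown):

import javafx.scene.Node;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.function.Predicate;

// Rough sketch of the area-picking fallback.
final class AreaPicker {

    interface PixelPicker {
        Node pick(double sceneX, double sceneY); // hypothetical hook
    }

    static Node pickArea(double cx, double cy, double radius, int stride,
                         PixelPicker picker, Predicate<Node> isTouchSensitive) {
        // 1) Pick normally at the centre; a touch-sensitive hit wins immediately.
        Node atCenter = picker.pick(cx, cy);
        if (atCenter != null && isTouchSensitive.test(atCenter)) {
            return atCenter;
        }

        // 2) Otherwise scan points in the touch area, nearest-to-centre first,
        //    checking only every 'stride'-th pixel.
        List<double[]> offsets = new ArrayList<>();
        for (double dx = -radius; dx <= radius; dx += stride) {
            for (double dy = -radius; dy <= radius; dy += stride) {
                if (dx * dx + dy * dy <= radius * radius) {
                    offsets.add(new double[] { dx, dy });
                }
            }
        }
        offsets.sort(Comparator.comparingDouble(o -> o[0] * o[0] + o[1] * o[1]));

        for (double[] o : offsets) {
            Node n = picker.pick(cx + o[0], cy + o[1]);
            if (n != null && isTouchSensitive.test(n)) {
                return n; // closest sampled pixel owned by a touch-sensitive node
            }
        }

        // 3) Nothing touch-sensitive nearby: fall back to the node at the centre.
        return atCenter;
    }
}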
What do you think? Any better ideas?
On 13.11.2013 22:09, Daniel Blaukopf wrote:
Summarizing our face to face talk today:
I see that the case described by Pavel is indeed a problem and I
agree with you that not every node needs to be a participant in
the competition over which node grabs touch input. However I’m not
keen on the idea of changing behavior based on which nodes have
listeners on them. CSS seems like the place to do this (as I
think Pavel suggested earlier). In Pavel’s case, either:
- the upper child node has the CSS tag saying “enable
extended capture zone” and the lower child doesn’t: then the
upper child’s capture zone will extend over the lower child
- or both will have the CSS tag, in which case the upper
child’s capture zone would be competing with the lower child’s
capture zone. As in any other competition between capture zones
the nearest node should win. The effect would be the same as if
the regular matching rules were applied on the upper child. It
would also be the same as if only the lower child had an
extended capture zone. However, I’d consider this case to be
bad UI programming.
We agreed that “in a competition between capture zones, pick
the node whose border is nearest the touch point” was a
reasonable way to resolve things.
On Nov 13, 2013, at 12:31 PM, Seeon Birger wrote:
Your example of 'child over child' is an interesting case
which raises some design aspects of the desired picking behavior:
1. Which node to pick when one node has a 'strict
containership' over the touch center and the other node only
has a fuzzy containership (the position falls in the fuzzy area).
2. Accounting for z-order for extended capture zone area.
3. Accounting for parent-child relationship.
Referring to your 'child over child' example:
The conflict would arise where the touch point center position
falls in the capture zone area of child2 but also clearly
falls in the strict bounds of child1.
Generally, when two control nodes compete over the same touch event
(e.g. child1 & child2 in Daniel's diagram), it seems that we
would like to give priority to "strict containership" over fuzzy
containership.
But in your case it's probably not the desired behavior.
Also note that in the general case there almost always
exists some container/background node that strictly contains
the touch point, but it would probably be an ancestor of the
child node, so the usual parent-child relationship order will
give preference to the child.
One way out of it is to honor the usual z-order for the extended
area of child2, so when a touch center hits the fuzzy area of
child2, then child2 would be picked.
But it is not ideal for Daniel's example, where the two nodes don't
strictly overlap, but their capture zones do. Preferring one child
by z-order (which matches the order of children in the parent) is
not natural here, and we might better choose the node which is
"closer" to the touch point.
So to summarize, I suggest this rough picking algorithm:
1. Choose all uppermost nodes which are not transparent to
mouse events and contain the touch point center either
strictly or by their capture zone.
2. Remove all nodes that are strictly overlapped by another
node and are below that node in z-order.
3. Out of those left, choose the "closest" node (the concept
of "closest" should employ some calculation which might not be
trivial in the general case).
4. Once a node has been picked, we follow the usual node chain
for event processing.
Care must be taken so we do not break the current model for event
processing. For example, if a node is picked by its capture
zone, it means that the position does not fall in the
boundaries of the node, so existing event handling code that
relies on that would break. So I think the capture zone
feature should be selectively enabled for certain types of
nodes such as buttons or other classic controls.
From: Pavel Safrata
Sent: Tuesday, November 12, 2013 1:11 PM
To: Daniel Blaukopf
Subject: Re: discussion about touch events
(Now my answer using external link)
this is quite similar to my idea described earlier. The major
difference is the "fair division of capture zones" among
siblings. It's an interesting idea, let's explore it. What
pops up first is that children can also overlap. So I think it
would behave like this (green denotes capture zones):
Child in parent vs. Child over child:
..wouldn't it? From the user's point of view this seems confusing:
both cases look the same but behave differently. Note that in
the case on the right, the parent may still be the same; the
developer only adds a fancy background as a new child and
suddenly the red child can't be hit that easily. What do you
think? Is it an issue? Or would it not behave this way?
On 12.11.2013 12:06, Daniel Blaukopf wrote:
(My original message didn't get through to openjfx-dev
because I used
inline images. I've replaced those images with external links)
On Nov 11, 2013, at 11:30 PM, Pavel Safrata wrote:
What Seeon, Assaf and I discussed earlier was building some fuzziness
On 11.11.2013 17:49, Tomas Mikula wrote:
On Mon, Nov 11, 2013 at 1:28 PM, Philipp Dörfler
<phdoerf...@gmail.com> wrote:
I see the need to be aware of the area that is covered by the touch
What about picking the one that is closest to the center of that area?
rather than just considering that area's center point.
I'd guess that this adds a new layer of complexity, though.
Say we have a button on some background and both the background and
the button have an onClick listener attached. If you tap the
button in a way that the touched area's center point is outside
the button's boundaries - what event will be fired? Will both the
background and the button receive a click event? Or just the
background or the button exclusively? Will there be a new event
type which gets fired in case of such area-based taps?
My suggestion would therefore be to have an additional touch
event which gives precise information about the diameter and location of
the tap. Besides that there should be some kind of mechanism for
choosing which node's onClick will be called.
There is always something directly at the center of the touch
(possibly the scene background, but it can have event handlers too).
That's what we pick right now.
into the node picker, so that instead of each node capturing only
events directly on top of it:
Non-fuzzy picker: http://i.imgur.com/uszql8V.png
..nodes at each level of the hierarchy would capture events near
their borders as well:
Fuzzy picker: http://i.imgur.com/ELWamYp.png
In the above, "Parent" would capture touch events within a certain
radius around it, as would its children "Child 1" and "Child 2". Since
"Child 1" and "Child 2" are peers, they would have a sharp boundary
between them, a watershed on either side of which events would go to
one child node or the other. This would also apply if the children
were further apart; they would divide the no-man's land between them.
Of course this no-man's land would be part of "Parent" and would
be touch-sensitive - but we won't consider "Parent" as a target
until we have ruled out using one of its children's extended
capture zones.
The capture radius could either be a styleable property on nodes
or could be determined by the X and Y size of a touch point as
reported by the touch screen. We'd still be reporting a touch point,
not a touch area. The touch target would be, as now, a single node.
This would get us more reliable touch capture at leaf nodes of the
node hierarchy at the expense of it being harder to tap the
background. This is likely to be a good trade-off.
Maybe the draw order / order in the scene graph / z-buffer
might be sufficient to model what would happen in the real,
physical world?
On 11.11.2013 13:05, "Assaf Yavnai" wrote:
The ASCII sketch looked fine on my screen before I sent it
:( I hope the idea is clear from the text (now in the reply it
also looks good).
On 11/11/2013 12:51 PM, Assaf Yavnai wrote:
I hope that I'm right about this, but it seems that touch events
in Glass are translated (and reported) as a single point
(x & y) without an area, like pointer events.
AFAIK, the controls respond to touch events the same as to mouse
events (using the same pickers), and as a result a button,
for example, will only be triggered if the x & y of the touch
is within the control area.
This means that small controls, or even quite large ones
(like buttons with text), will often get missed because the reported
point falls outside them, although from a UX point of view it is
strange, as the user clearly pressed on a node (the finger was
clearly above it).
With the current implementation it's hard to use small
controls, like scrollbars in lists, and it is almost impossible to
implement something like a 'screen navigator' (the series of
dots at the bottom of a smartphone's screen which allows you to
jump directly to a 'far away' page).
To illustrate it, consider the low-resolution sketch below: the '+'
is the actual x,y reported, the ellipse is the finger area,
and the rectangle is the node.
With the current implementation this type of tap will not be picked:
[garbled ASCII sketch: an ellipse (the finger) overlapping a rectangle
(the node), with the '+' reported point inside the ellipse but outside
the rectangle] In this scenario the 'button' will not get the event.
If your smartphone supports it, turn on the touch debugging
options in settings and see that each point translates to a
large circle, and whatever falls in it, or reasonably close to it,
gets the event.
I want to start a discussion to understand if my assumptions are
accurate and to understand what can be done, if anything, for the
coming release or the next one.
We might use the recently opened RT-34136
<https://javafx-jira.kenai.com/browse/RT-34136> for logging this, or
open a new JIRA issue for it.