So,
the problem is that everything you see on screen consists of shapes,
which are drawn by Athens.
Now, since drawings can be quite complex and include various transformations,
it is hard to track them manually for mouse (or fat
finger) pointer hit testing.

The simplest example:
 I want to draw a rectangle rotated by 45 degrees,
and I want to change its color when the mouse is over it.

If you look at the situation from the global coordinate system's
perspective, then in order to find whether some morph contains a given point
or not,
you have to iterate over all its parent morphs (because they can also
have coordinate system transformations).

But in a local coordinate system, things remain extremely simple: testing
whether a point is inside a rectangle (which most morphs usually are)
is a piece of cake, isn't it?
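To make that concrete, here is a small Python sketch (illustrative only, not Pharo/Athens code): rather than testing the point against the rotated rectangle in global coordinates, we apply the inverse rotation to the point and do a trivial axis-aligned check in the rectangle's local coordinate system.

```python
import math

def hit_test_rotated_rect(point, rect_w, rect_h, angle_deg):
    """Test whether a global-space point falls inside a rect_w x rect_h
    rectangle rotated by angle_deg around the origin: apply the INVERSE
    rotation to the point, then do a trivial local containment check."""
    px, py = point
    a = math.radians(-angle_deg)   # inverse of the rectangle's rotation
    lx = px * math.cos(a) - py * math.sin(a)
    ly = px * math.sin(a) + py * math.cos(a)
    return 0 <= lx <= rect_w and 0 <= ly <= rect_h
```

The same trick generalizes to any invertible affine transformation, which is exactly why capturing the canvas transformation at draw time is enough.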

I think I found an interesting approach for coordinate system
feedback (so it can be used for hit tests as well as many other
things).

So, let's consider a simple draw method:

Morph>>drawOnAthensCanvas: aCanvas
   aCanvas
         setPaint: Color red;
         drawShape: self bounds

Now, if we add a single line here:

   self registerShapeForEvents: self bounds canvas: aCanvas.

things get very interesting.

What this method should do is simply capture enough state from the
current canvas (the transformation matrix, the clipping,
and of course the shape), so that later it can be used to test whether
some point (like the mouse cursor position) is over it or not.
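A minimal sketch of such a captured record in Python (the class name and the 2x3 affine-matrix layout are my own illustration, not the actual Athens API): the record snapshots the canvas transformation and the local shape at draw time, and the containment test inverts that transformation to land back in local coordinates.

```python
class EventShapeRecord:
    """Illustrative snapshot of canvas state taken during drawing:
    a 2x3 affine matrix ((a, b, tx), (c, d, ty)) mapping local ->
    global coordinates, plus the local shape (here an axis-aligned
    rectangle (x, y, w, h) for simplicity)."""

    def __init__(self, morph, matrix, rect):
        self.morph = morph
        self.matrix = matrix
        self.rect = rect

    def contains_point(self, gx, gy):
        # Invert the affine transform to map the global point into
        # the morph's local coordinate system...
        (a, b, tx), (c, d, ty) = self.matrix
        det = a * d - b * c
        if det == 0:
            return False       # degenerate transform: nothing is hit
        lx = ( d * (gx - tx) - b * (gy - ty)) / det
        ly = (-c * (gx - tx) + a * (gy - ty)) / det
        # ...where the containment test is trivial again.
        x, y, w, h = self.rect
        return x <= lx <= x + w and y <= ly <= y + h
```

A real implementation would also check the captured clipping region, and would support arbitrary path shapes rather than plain rectangles.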

Why in the drawing method, you might ask? Where else, I would answer. A
branch of parent morphs can impose its own clipping and transformations,
which you can easily observe during drawing, but which become tedious
to recover once you step outside of it:
to get the same information you would have to manually walk the whole
morph hierarchy and apply the same transformation/clipping calculations
until you reach the morph in question.

The drawing method is responsible for the visual representation of a morph
on the display medium, which means that it actually defines the shape(s)
with which users will interact.

So, by drawing a bunch of morphs on the desktop using such an approach, we
automatically capture all the geometry (as well as its relative
hierarchy) that needs to be tested for mouse hits (or for receiving
mouse events in general).
And the hit test, which we need to perform on every mouse move, boils
down to simply iterating over that hierarchical list and testing the
current hand position against each element.
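That iteration can be sketched in a few lines of Python (again purely illustrative; the record class here is a hypothetical stand-in for whatever the drawing pass registered):

```python
class HitRecord:
    """Minimal stand-in for a captured shape record: a morph plus its
    registered test area, here a plain axis-aligned rectangle
    (x, y, w, h) in global coordinates for brevity."""
    def __init__(self, morph, rect):
        self.morph = morph
        self.rect = rect

    def contains_point(self, x, y):
        rx, ry, w, h = self.rect
        return rx <= x <= rx + w and ry <= y <= ry + h


def morph_under_hand(records, hand_x, hand_y):
    """Walk the captured records in reverse drawing order (topmost
    first) and return the first morph whose registered shape contains
    the current hand position, or None if nothing is hit."""
    for record in reversed(records):
        if record.contains_point(hand_x, hand_y):
            return record.morph
    return None
```

Walking in reverse drawing order gives the usual "topmost morph wins" behavior for overlapping shapes.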

Morphs which are not interested in receiving mouse events (hidden
morphs, or purely structural parent morphs of various kinds) can simply
avoid publishing anything into that list,
instead of implementing numerous #handlesMouseDown:/#handlesMouseOver: and so on,
thus saving the cycles spent on testing them every time you touch
the mouse just because some of their submorphs actually are interested in
receiving such events.

And of course the most interesting aspect of this approach is handling
morphs with complex geometry.
With rectangles it is fairly simple to imagine a visual
feedback system based solely on a morph's bounds (like the one
we currently use), but for morphs with complex geometry,
things can become quite tedious.

-- 
Best regards,
Igor Stasenko.
