On Wed, Jul 31, 2013 at 1:17 PM, Johannes Scholz
<[email protected]> wrote:
> Hi,
>
> running out of tracking volume is just a matter of correct positioning and
> training. I've made a small video for demonstration.
>
> http://www.youtube.com/watch?v=0wBiiKo_hys
Thanks for the video, it helped me understand how to use it, and it
is indeed a bit better. You should link the video from the website for
reference.
I did play with it again, but I observed some issues:
I still find it way too fiddly for my taste - the rotations are fairly
hard to do consistently. It is easy to perform a rotation, but hard to
perform one that actually puts the object into the orientation I want
(instead of some semi-random one). I also didn't realize that one has
to spread the fingers rather than hold them together for the gesture
to be detected correctly.
Also, I am not sure that emulating 2D multi-touch gestures to drive
an orbit manipulator/trackball is the best metaphor here - the
manipulator rotates in 3 DOF, but the two-handed rotation gesture is
meant to rotate around a single axis only - the normal to the plane in
which the user holds their hands (a table or a phone screen, usually).
I think a direct interaction ("grab the object with your hand and
rotate it") would work a lot better here - kinda like a virtual data
glove. That would also make it easier to stay inside the tracking
volume - keeping both hands together during the manipulation is not
really intuitive (one hand gets in the way of the other), and when you
bring them apart, you lose tracking. One hand at a time would probably
work better.
What do you think?
Regards,
Jan
_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org