> Date:         Fri, 26 Oct 2001 04:37:42 -0500
> From: Alex Terrazas <[EMAIL PROTECTED]>
>
> I am trying to unpack the whole view model thing...
>
> One question that I have is how does the view model
> interact with the rendering loop.  From previous
> emails, it was confirmed by Mark Hood that the
> best way to do head tracking is through the PhysicalBody
> setHeadIndex() method because it is in an optimized
> loop.  Could we get a little more info on what that
> loop is all about?

The head tracking sensor is special: it is the only one that Java 3D reads
directly in its rendering loop.  All other sensors are read by application or
utility behaviors, which go through a separate scheduling mechanism.  In the
current implementation this can produce a slight frame update lag.  This is
especially bad for head tracking, since with HMD devices such lags can literally
make the user dizzy or nauseated.  The effect is less onerous with fixed-screen
displays, but still highly undesirable.
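
For reference, wiring a head sensor into that optimized loop boils down to a
few calls on View and PhysicalEnvironment.  Here's a minimal sketch (the sensor
is assumed to come from your tracker's InputDevice driver, and index 0 is
arbitrary):

import javax.media.j3d.PhysicalEnvironment;
import javax.media.j3d.Sensor;
import javax.media.j3d.View;

public class HeadTrackingSetup {

    // Registers 'headSensor' with the view's PhysicalEnvironment and turns
    // on tracking, so the renderer reads the head sensor itself each frame.
    public static void enableHeadTracking(View view, Sensor headSensor) {
        PhysicalEnvironment env = view.getPhysicalEnvironment();

        env.setSensorCount(1);     // this example has only the head sensor
        env.setSensor(0, headSensor);
        env.setHeadIndex(0);       // tell Java 3D which sensor is the head

        view.setTrackingEnable(true);
    }
}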

From your previous emails, I think you might be wondering why you couldn't just
chuck the details of the view model and drive a view platform behavior directly
with your head tracking sensor, despite the potential for an annoying frame lag.
This will only work to a limited extent: the view will respond to your head
movements, but if you are using the default window eyepoint policy of
RELATIVE_TO_FIELD_OF_VIEW, you have a static projection frustum and the view
will never look quite right, especially with stereo displays.
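
For comparison, the "drive the view platform directly" approach looks roughly
like the following sketch (class and field names are mine).  It responds to
head motion, but the reads go through the behavior scheduler and the frustum
never changes:

import java.util.Enumeration;
import javax.media.j3d.Behavior;
import javax.media.j3d.Sensor;
import javax.media.j3d.Transform3D;
import javax.media.j3d.TransformGroup;
import javax.media.j3d.WakeupOnElapsedFrames;

// Drives the view platform's TransformGroup straight from the head sensor.
// The TransformGroup needs ALLOW_TRANSFORM_WRITE set, and the behavior
// needs scheduling bounds; reads happen in the behavior scheduler, so the
// view trails the renderer, and the projection frustum never changes.
public class NaiveHeadBehavior extends Behavior {
    private final Sensor head;
    private final TransformGroup viewPlatformTG;
    private final Transform3D read = new Transform3D();
    private final WakeupOnElapsedFrames everyFrame = new WakeupOnElapsedFrames(0);

    public NaiveHeadBehavior(Sensor head, TransformGroup viewPlatformTG) {
        this.head = head;
        this.viewPlatformTG = viewPlatformTG;
    }

    public void initialize() {
        wakeupOn(everyFrame);
    }

    public void processStimulus(Enumeration criteria) {
        head.getRead(read);                 // latest tracker reading
        viewPlatformTG.setTransform(read);  // applied a frame behind
        wakeupOn(everyFrame);
    }
}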

Head tracking in a fixed-screen environment is all about dynamically adjusting
the projection frustum for a screen in response to head movements.  This is
where the fixed image plane of the camera view model fails.  In a lower-level
API such as OpenGL you deal with this by recomputing the projection matrix
explicitly for each eye, each display surface, and each new eye position.
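
In rough outline (this is generic off-axis projection math, not Java 3D API,
and all the names are mine), the per-eye, per-screen computation looks
something like this, assuming the screen occupies a rectangle in the z = 0
plane of a screen-aligned frame and the eye position is expressed in that
frame:

// Off-axis frustum parameters for one eye and one screen.  The screen is
// assumed to span [screenLeft, screenRight] x [screenBottom, screenTop] in
// the z = 0 plane of a screen-aligned frame, with the eye at (eyeX, eyeY,
// eyeZ), eyeZ > 0, looking down -z.  The results are what you would hand
// to something like glFrustum for that eye/screen pair, every frame.
public class OffAxisFrustum {
    public final double left, right, bottom, top, near;

    public OffAxisFrustum(double screenLeft, double screenRight,
                          double screenBottom, double screenTop,
                          double eyeX, double eyeY, double eyeZ,
                          double near) {
        this.near = near;
        // Project the screen edges (relative to the eye) onto the near plane.
        double s = near / eyeZ;
        this.left   = (screenLeft   - eyeX) * s;
        this.right  = (screenRight  - eyeX) * s;
        this.bottom = (screenBottom - eyeY) * s;
        this.top    = (screenTop    - eyeY) * s;
    }
}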

This works, and I've implemented such a system along with anybody else who's
into VR, but clearly what is desirable is a view model that lets you say: here
is my screen and how it's positioned relative to other screens (if any), here
are my eyes relative to the space in which the screens are defined, and here is
where the whole physical configuration should be in the virtual world; now go
and render the appropriate images please.

In Java 3D you specify the screen (image plate) positions relative to the
tracker base, define a coexistence coordinate system containing the screens,
tracker base, and eye positions, and then specify the location of the view
platform origin in coexistence coordinates through the view attach policy.
(You can think of coexistence coordinates as physical world coordinates, but
since there are other physical coordinate systems, such as tracker coordinates
and image plate coordinates, this one is given a name that indicates its
function: a space in which all these coordinate systems, including the virtual
one, coexist.)
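
In code, that calibration amounts to something like the following sketch (the
transforms themselves are placeholders you would measure for your own
installation):

import javax.media.j3d.PhysicalEnvironment;
import javax.media.j3d.Screen3D;
import javax.media.j3d.Transform3D;
import javax.media.j3d.View;
import javax.media.j3d.ViewPlatform;

public class FixedScreenConfig {

    // Applies measured calibration transforms for one screen and attaches
    // the view platform origin to the nominal screen in coexistence.
    public static void configure(View view, ViewPlatform vp, Screen3D screen,
                                 Transform3D trackerBaseToImagePlate,
                                 Transform3D coexistenceToTrackerBase) {
        // Where this screen (image plate) sits relative to the tracker base.
        screen.setTrackerBaseToImagePlate(trackerBaseToImagePlate);

        // Where the tracker base sits in coexistence coordinates.
        PhysicalEnvironment env = view.getPhysicalEnvironment();
        env.setCoexistenceToTrackerBase(coexistenceToTrackerBase);

        // Place the view platform origin at the nominal screen location
        // in coexistence coordinates.
        vp.setViewAttachPolicy(View.NOMINAL_SCREEN);
    }
}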

The basis vectors of the view platform are always aligned with coexistence, and
the scaling between the virtual and physical worlds is specified through the
screen scale, so this establishes the complete mapping of the physical world to
the virtual world.  The projection of the virtual world onto the available
display surfaces is then automatic.
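
A sketch of the scaling half of that mapping, with placeholder values:

import javax.media.j3d.Screen3D;
import javax.media.j3d.View;

public class ScreenScaleConfig {

    // Describes the physical display surface and pins the physical-to-virtual
    // scale to an explicit value instead of one derived from the screen size.
    public static void configure(View view, Screen3D screen) {
        screen.setPhysicalScreenWidth(1.2);   // meters (placeholder)
        screen.setPhysicalScreenHeight(0.9);  // meters (placeholder)

        view.setScreenScalePolicy(View.SCALE_EXPLICIT);
        view.setScreenScale(1.0);             // 1:1 physical-to-virtual scale
    }
}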

> Also, regarding Sensor, I don't see much info on prediction.
> How is prediction implemented?  What is the
> algorithm for head or hand prediction?

Unfortunately I can't say anything about the implementation because of IP
issues.  This is also an area that is currently under development.  Suffice it
to say that you have nothing to lose by specifying prediction policies for your
sensors, and you may gain some performance if you do.
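
If you want to try it, the requests look something like this sketch against the
1.2 Sensor API:

import javax.media.j3d.Sensor;

public class PredictionSetup {

    // Requests readings extrapolated to the next frame time, using the
    // predictor intended for each kind of motion.
    public static void requestPrediction(Sensor headSensor, Sensor handSensor) {
        headSensor.setPredictionPolicy(Sensor.PREDICT_NEXT_FRAME_TIME);
        headSensor.setPredictor(Sensor.HEAD_PREDICTOR);

        handSensor.setPredictionPolicy(Sensor.PREDICT_NEXT_FRAME_TIME);
        handSensor.setPredictor(Sensor.HAND_PREDICTOR);
    }
}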

> Finally, and this one is super basic, what is the HotSpot?
> The docs say:
>
> setHotSpot()
> "Set the sensor's hotspot in this sensor's coordinate system."
>
> I still don't quite get that--Is that a point in space where the
> sensor is active or a reference point?

It's just like the hotspot for a 2D mouse cursor: it specifies the active
location which actions like picking or pointing use.  You often want to produce
a visible echo of a sensor in the virtual world, like an arrow maybe, but for a
pick operation you need to define a point relative to that echo which
establishes exactly what is to be picked.
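
For example, if your echo is a wand whose visible tip sits 10 cm down the
sensor's -z axis (the geometry here is made up), you might do:

import javax.media.j3d.Sensor;
import javax.vecmath.Point3d;

public class HotspotSetup {

    // Puts the hotspot at the wand tip, 10 cm down the sensor's -z axis,
    // so picking and pointing act at the visible point of the echo.
    public static void setWandTipHotspot(Sensor wandSensor) {
        wandSensor.setHotspot(new Point3d(0.0, 0.0, -0.10));
    }
}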

Currently the hotspot semantics are broken in the 1.2.1 release.  They are
fixed in the upcoming 1.3 beta.  The next release also provides a new utility
called ConfiguredUniverse, including an example program and sample
configuration files that may help you in setting up your head tracking
application.
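
Once that release is out, the usage should look something like the following
sketch (the exact entry points are my assumption from the utility's
description, so check the 1.3 docs when they arrive):

import java.net.URL;
import com.sun.j3d.utils.universe.ConfiguredUniverse;

public class ConfiguredSetup {
    public static void main(String[] args) {
        // The configuration file (named by the j3d.configURL property)
        // describes the screens, trackers, and view setup.
        URL config = ConfiguredUniverse.getConfigURL();
        ConfiguredUniverse universe = new ConfiguredUniverse(config);
        // ...then build and attach your scene graph as with SimpleUniverse.
    }
}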

-- Mark Hood
