> Date:         Thu, 19 Jun 2003 18:17:36 -0400
> From: "Young, Jason" <[EMAIL PROTECTED]>
>
> This is a general graphics question but there really is a J3D aspect to it:
> Namely how does _JAVA3D_ convert the point??? This is something I'm
> investigating right now. Here is what I've come up with, take it with a
> grain of salt as I'm still working out the code. I'm eager to hear other
> people's responses, as I'm unsure of the subtleties myself. Anyway, here's
> what I do:

You could call Canvas3D.getVworldToImagePlate() and transform the vworld
point to a 3D image plate point.  Project it into an AWT pixel by calling
Canvas3D.getPixelLocationFromImagePlate().
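
A minimal sketch of those two calls, assuming an attached, rendering
Canvas3D named `canvas` (untested here, so treat it as a starting point
rather than drop-in code):

```java
import javax.media.j3d.Canvas3D;
import javax.media.j3d.Transform3D;
import javax.vecmath.Point2d;
import javax.vecmath.Point3d;

// Sketch: convert a virtual-world point to an AWT pixel location.
Point2d vworldToPixel(Canvas3D canvas, Point3d vworldPoint) {
    // Transform the vworld point into image plate coordinates.
    Transform3D vworldToImagePlate = new Transform3D();
    canvas.getVworldToImagePlate(vworldToImagePlate);

    Point3d platePoint = new Point3d(vworldPoint);
    vworldToImagePlate.transform(platePoint);

    // Project the image plate point to an AWT pixel.
    Point2d pixel = new Point2d();
    canvas.getPixelLocationFromImagePlate(platePoint, pixel);
    return pixel;
}
```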

If you have a scene with a dynamically changing view point, then you may
run into the problem of the Canvas3D methods returning information for
the previous frame instead of the current frame.  You can work around
this by using the methods in com.sun.j3d.utils.universe.ViewInfo
(available in Java 3D 1.3.1), which directly query the application scene
graph instead of the internal representations.  (The current Java 3D
architecture injects 1 or 2 frames of lag between the application state
and the internal representations).

Read the source code for ViewInfo if you want to understand how to do
this on your own.  You can get the source code for the utilities in the
Java 3D SDK release.  Although it was written to work around
synchronization issues, ViewInfo is essentially an alternative public
implementation of the Java 3D view model.  It computes the same
transforms as the private core classes, modulo any bugs.

For a complete understanding of the Java 3D view model, I recommend the
following: 1) read the official but nearly incomprehensible
documentation in the Java 3D Specification and class javadocs; 2) read
the unofficial, slightly less incomprehensible description of the view
model in the Java 3D Configuration File documentation (get there from
the class overview of ConfiguredUniverse and ConfigContainer); and 3)
read the ViewInfo source code to see how it all really works.

> - take the (x,y,z) point, transform it to (x',y',z') by the view and
> projection matrices (if in compatibility mode, this is easy, if not, well
> I'm working on that)

After transforming by the view and projection matrices and dividing by
W, you'll end up in 3D Euclidean clipping coordinates (ranging from
[-1.0 .. +1.0] in each dimension).  The X and Y dimensions are mapped
anisotropically to the width and height of the Canvas3D, so, knowing the
dimensions of the canvas, you can do simple scale/offsets of the
normalized X and Y clip coordinates, dropping the Z, to get your 2D
pixel.
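
The scale/offset step can be written out in plain Java, assuming the
usual AWT convention of pixel (0, 0) at the top left with Y growing
downward:

```java
final class PixelMap {
    // Map normalized device coordinates (x, y in [-1, +1], i.e. clip
    // coordinates after the divide by W) to an AWT pixel, flipping Y
    // so it grows downward from the top-left corner.
    static double[] ndcToPixel(double ndcX, double ndcY,
                               int width, int height) {
        double px = (ndcX + 1.0) * 0.5 * width;
        double py = (1.0 - ndcY) * 0.5 * height;  // flip Y for AWT
        return new double[] { px, py };
    }
}
```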

You can use the inverse of the transform returned by
ViewInfo.getEyeToVworld() to get the view matrix, and
ViewInfo.getProjection() to get the projection transform from eye
coordinates to clipping coordinates.  You may want to modify
getProjection() to not perform the perspective depth scaling of Z since
you'll be dropping it anyway.
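
The by-hand pipeline can be sketched with plain 4x4 matrices, assuming
the eye-to-vworld transform is a rigid motion (rotation plus
translation, no scale) so that its inverse is just [R^T | -R^T t]; with
a real ViewInfo you would pull the matrices out of getEyeToVworld() and
getProjection() instead of building them yourself:

```java
final class ViewMath {
    // Inverse of a rigid transform [R | t]: [R^T | -R^T t].
    static double[][] invertRigid(double[][] m) {
        double[][] inv = new double[4][4];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                inv[i][j] = m[j][i];                // R^T
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                inv[i][3] -= m[j][i] * m[j][3];     // -R^T t
        inv[3] = new double[] { 0, 0, 0, 1 };
        return inv;
    }

    // Apply a 4x4 matrix to a homogeneous point (x, y, z, w).
    static double[] transform(double[][] m, double[] p) {
        double[] out = new double[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                out[i] += m[i][j] * p[j];
        return out;
    }

    // vworld -> eye -> clip -> NDC (divide by W, dropping nothing yet).
    static double[] vworldToNdc(double[][] eyeToVworld, double[][] proj,
                                double[] p) {
        double[] eye = transform(invertRigid(eyeToVworld), p);
        double[] clip = transform(proj, eye);
        return new double[] { clip[0] / clip[3],
                              clip[1] / clip[3],
                              clip[2] / clip[3] };
    }
}
```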

> - find the point of intersection between the line L and the plane P, where L
> is formed between (x',y',z') and (0,0,0), and P is the plane z = -1

This is exactly what Canvas3D.getPixelLocationFromImagePlate() does,
except it uses the eye position in image plate coordinates instead of
(0, 0, 0), and projects to the image plate Z = 0 plane.
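
The quoted step is a one-line parametric intersection.  A sketch, using
the eye at the origin as in the quoted text (the plane Z is a
parameter, so the poster's z = -1 and the image plate's Z = 0 variant
with a non-origin eye are the same idea):

```java
final class PlaneProject {
    // Intersect the line from the eye (here the origin) through
    // (x, y, z) with the plane z = planeZ.  Solving t * z = planeZ
    // gives t = planeZ / z; undefined when z == 0 (point lies in the
    // eye plane).
    static double[] projectToPlane(double x, double y, double z,
                                   double planeZ) {
        double t = planeZ / z;
        return new double[] { x * t, y * t, planeZ };
    }
}
```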

But you don't need to do this if you have the projection and view
matrices.

> I have code to do all this but my results are slightly off. I have a point
> that is rendered by J3D on the edge of the screen but by my calculations
> should be about 6 pixels below the edge therefore not rendered. There's a
> subtlety here that I'm missing. I think J3D does an additional small
> transform, but I have no way of knowing other than that my math and code are
> triple-checked but my results are still different from J3D's.

You may be having problems with the projection matrix.  In Java 3D it's
defined from the location of the eye in physical coordinates (meters)
and the location of the Canvas3D in physical coordinates.  With the
standard defaults the eye is centered in the canvas in X and Y, with a Z
offset that produces the current field of view across the width of the
canvas.  The projection frustum extends from the eye to the edges of the
canvas, with the fore and aft clip planes relative to the eye in meters
(although this can be changed with different policies).  The canvas
doesn't necessarily map to the location of either the fore or aft clip
plane.
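
With the eye centered on the canvas, the nominal eye offset follows
from that description by simple trigonometry; a sketch of the
relationship (the actual handling of the various policies in the core
is more involved than this):

```java
final class EyeOffset {
    // A horizontal field of view 'fov' (radians) spanning a canvas of
    // physical width 'width' (meters), with the eye centered in X and
    // Y, puts the eye at distance z = (width / 2) / tan(fov / 2) in
    // front of the image plate.
    static double eyeDistance(double fovRadians, double widthMeters) {
        return (widthMeters / 2.0) / Math.tan(fovRadians / 2.0);
    }
}
```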

The transform from vworld coordinates to eye coordinates goes from
virtual coordinates to physical coordinates, so you may be encountering
scaling problems as well.  The whole physical/virtual transform is
confusing to many people since Java 3D doesn't use a camera view model,
but instead computes views based on the positions of the eyes relative
to the canvases, in order to support the calibrations needed for virtual
reality applications.  Again, read the ViewInfo source code for more
understanding, and we can help you out with any more specific questions
you may have.

-- Mark Hood
