Hi,
 
In my application, I am evaluating LOS (line of sight) for a terrain from a given view position.  For those familiar with the topic, I am basing it on the Shadow Mapping algorithm described in the literature.  Briefly, you take a virtual world position, transform it into the light's coordinate system, and compare the z value of the transformed position against the corresponding depth buffer value.  If the depth buffer value is less than the transformed z, the position is in shadow.
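 
To make that concrete, here is roughly what I mean (just a sketch: the vworldToLightClip transform, the depth[] layout, and the bias value are placeholders for my own data, and it assumes an affine/orthographic light projection that maps x, y, z into [0,1], so no perspective divide is needed):

    import javax.media.j3d.Transform3D;
    import javax.vecmath.Point3d;

    // Shadow-map style test: is vworldPos hidden from the light/view point?
    static boolean inShadow(Point3d vworldPos, Transform3D vworldToLightClip,
                            float[] depth, int width, int height, float bias) {
        Point3d p = new Point3d(vworldPos);
        vworldToLightClip.transform(p);          // into the light's coordinates
        int px = (int) (p.x * (width - 1));      // pixel column in the depth map
        int py = (int) (p.y * (height - 1));     // pixel row in the depth map
        if (px < 0 || px >= width || py < 0 || py >= height) return false;
        float stored = depth[py * width + px];   // depth the light recorded here
        // In shadow if the light saw something nearer at this pixel.
        return stored < p.z - bias;
    }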
 
My approach is to use the z value I get when I transform the virtual world position to image plate coordinates via getVworldToImagePlate(), and compare that against the depth buffer.  I find the depth buffer value by using getPixelLocationFromImagePlate(imagePt, pixel) to map the transformed virtual world position to its projected screen pixel, and I use that pixel as an index into the depth buffer to look up the z value stored at that location.  I also scale the value stored in the depth buffer to account for the front and back clip planes.
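 
In code, the lookup is roughly this (a sketch: 'depth' is my canvas-sized, row-major depth array; how I read it back from the canvas is omitted):

    import javax.media.j3d.Canvas3D;
    import javax.media.j3d.Transform3D;
    import javax.vecmath.Point2d;
    import javax.vecmath.Point3d;

    // Map a virtual world point to its screen pixel and fetch the stored depth.
    static float storedDepthAt(Canvas3D canvas, Point3d vworldPos,
                               float[] depth, int canvasWidth) {
        Transform3D vwToPlate = new Transform3D();
        canvas.getVworldToImagePlate(vwToPlate);
        Point3d platePt = new Point3d(vworldPos);
        vwToPlate.transform(platePt);            // virtual world -> image plate
        Point2d pixel = new Point2d();
        canvas.getPixelLocationFromImagePlate(platePt, pixel);
        int ix = (int) pixel.x;                  // projected pixel column
        int iy = (int) pixel.y;                  // projected pixel row
        return depth[iy * canvasWidth + ix];     // raw depth-buffer value
    }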
 
I think I am mixing coordinate systems or not taking something into account, since the z value of the VworldToImagePlate result does not obviously relate to the value I get from the depth buffer.  I'm not sure how to interpret an image plate z value: does it contain depth information from the eye position, as I assumed?  If so, how do I scale it?  Is this even what I want?  Also, the depth buffer values I get don't match up with the actual virtual world distances from a point to the view position; they were typically about twice the actual virtual world distances, which didn't make much sense to me.
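 
For reference, the clip-plane scaling I'm attempting is the standard perspective depth linearization (assuming the buffer holds window-space depth in [0,1] for near/far planes; I may be wrong that this is the right mapping here, and note it yields distance along the view axis, not Euclidean distance to the eye point, so some mismatch is expected off-axis):

    // Window-space depth d in [0,1] -> eye-space distance along the view axis,
    // for a standard perspective projection with the given near/far planes.
    static double linearizeDepth(double d, double near, double far) {
        return (near * far) / (far - d * (far - near));
    }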
 
I am using defaults for the view parameters: an FOV of 60 degrees, front and back clipping planes at 1 and 100 (I tried the VIRTUAL_EYE and PHYSICAL_EYE clip policies etc. without much luck), and a 400x300 canvas.  Explicitly, the setup amounts to this (view is my existing javax.media.j3d.View):
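 
    import javax.media.j3d.View;

    view.setFieldOfView(Math.toRadians(60.0));  // setFieldOfView takes radians
    view.setFrontClipDistance(1.0);
    view.setBackClipDistance(100.0);
    view.setFrontClipPolicy(View.VIRTUAL_EYE);  // also tried PHYSICAL_EYE
    view.setBackClipPolicy(View.VIRTUAL_EYE);
 
Thanks for any insight into this.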
 
-Pedro