Hi Simon,

The OSG by default uses double-precision matrices (osg::Matrix is
typedef'd to osg::Matrixd) for all internal transforms and camera
matrices.  The matrices are all accumulated in doubles and passed to
OpenGL as doubles.  Most OpenGL drivers will then cast the matrices down
to floats when passing them to the GPU, but since the OSG passes a fully
accumulated modelview matrix, the precision is the best you can get.

To best handle scenes with large extents, one breaks the scene into
regional tiles, each of which has a local origin, with a MatrixTransform
above the tile subgraph to place it at its final world coordinates.  The
OSG is used widely in the GIS market, with many users handling whole-earth
databases without precision issues.

So... as long as you haven't deliberately compiled the OSG with
osg::Matrix typedef'd to osg::Matrixf, you'll be using doubles.  How you
manage your scene graph and internal transforms will be the key; get it
right as I suggest above and you shouldn't have issues.

Robert.


On 16 September 2013 17:48, Voelcker, Simon <
[email protected]> wrote:

> Hi,
>
> I am using MatrixTransform nodes with matrices that contain quite large
> numbers. When I stack two of these transforms, I notice severe floating
> point inaccuracies in the scene, although I use double precision matrices
> to set up my transform nodes. Is it possible that OSG uses only single
> precision here (internally), and if so, how can I force it to use double
> precision?
>
> I know I could multiply my matrices directly and use a single
> MatrixTransform node, but I want to avoid this since it breaks my
> architecture.
>
> Thanks in advance,
> Simon
> _______________________________________________
> osg-users mailing list
> [email protected]
> http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
>