Your problem seems to stem from having to repeat vertices multiple times so
that each copy can have its own normal. For example, let's say you need to
repeat each vertex 4 times because it is shared by 4 different facets. This
means you have xyz vertex plus xyz normal (24 bytes) * 4 * 20M = ~2GB. Do I
understand the issue correctly?

You don't want to specify the normal once per facet because that will send
you down the OpenGL slow path (setting the normal binding to
BIND_PER_PRIMITIVE). But it would be the easiest change and the fastest
course of action, and with display lists enabled it could still produce
acceptable performance.
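
Roughly, that change would look something like this ("geom" and
"facetNormals" here are placeholder names, not from your code):

    // One normal per facet instead of one per vertex.
    // facetNormals holds a single xyz normal for each triangle.
    osg::ref_ptr<osg::Vec3Array> facetNormals = new osg::Vec3Array;
    // ... fill facetNormals, one entry per facet ...
    geom->setNormalArray( facetNormals.get() );
    geom->setNormalBinding( osg::Geometry::BIND_PER_PRIMITIVE );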

There are normal compression schemes. I patented one while at HP that is
lossy but allows you to specify a variable amount of compression. In our
work, we found that a 6:1 compression ratio (storing each 12-byte normal as
a 2-byte short) still produced pretty good visual results. If you search for
my name in any patent search engine you can find more info. That algorithm
was storage-efficient but not computationally efficient.
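
I can't walk through the patented algorithm here, but just to illustrate
the 2-byte storage idea, here is a generic quantization of a unit normal's
spherical angles into two 8-bit values. This is NOT the HP scheme, only a
sketch:

    #include <cmath>
    #include <cstdint>

    // Encode a unit normal as two 8-bit spherical angles (16 bits total).
    uint16_t encodeNormal( float x, float y, float z )
    {
        const float PI = 3.14159265f;
        float theta = std::acos( z );           // [0, pi]
        float phi = std::atan2( y, x ) + PI;    // [0, 2*pi]
        uint16_t t = uint16_t( theta / PI * 255.0f + 0.5f );
        uint16_t p = uint16_t( phi / ( 2.0f * PI ) * 255.0f + 0.5f );
        return uint16_t( ( t << 8 ) | p );
    }

    // Decode back to an approximate unit normal.
    void decodeNormal( uint16_t n, float& x, float& y, float& z )
    {
        const float PI = 3.14159265f;
        float theta = ( ( n >> 8 ) & 0xFF ) / 255.0f * PI;
        float phi = ( n & 0xFF ) / 255.0f * ( 2.0f * PI ) - PI;
        x = std::sin( theta ) * std::cos( phi );
        y = std::sin( theta ) * std::sin( phi );
        z = std::cos( theta );
    }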

For a computationally efficient algorithm, try representing the xyz
components as 10-bit signed ints packed into a single 32-bit word, then
unpack them in a shader.
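
A sketch of that packing (the function names are mine; the unpack is shown
here in C++ but is what the shader would do):

    #include <cstdint>

    // Pack a unit normal into one 32-bit word as three 10-bit signed
    // ints; the top 2 bits go unused.
    uint32_t packNormal101010( float x, float y, float z )
    {
        // Map [-1,1] to [-511,511], then mask to 10 bits (two's complement).
        uint32_t xi = uint32_t( int32_t( x * 511.0f ) ) & 0x3FF;
        uint32_t yi = uint32_t( int32_t( y * 511.0f ) ) & 0x3FF;
        uint32_t zi = uint32_t( int32_t( z * 511.0f ) ) & 0x3FF;
        return ( zi << 20 ) | ( yi << 10 ) | xi;
    }

    // Inverse: sign-extend one 10-bit field and rescale to [-1,1].
    // Use shift = 0, 10, or 20 for x, y, or z.
    float unpackComponent( uint32_t word, int shift )
    {
        int32_t v = int32_t( ( word >> shift ) & 0x3FF );
        if( v & 0x200 )
            v -= 0x400;    // sign-extend 10-bit two's complement
        return float( v ) / 511.0f;
    }

That gets each normal down from 12 bytes to 4.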

Special data types allow other kinds of normal compression. For example,
bump maps and normals for heightfield data need only two components; the
third can always be derived.
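
Concretely (assuming heightfield or tangent-space normals, where z is
non-negative), the third component falls out of the unit-length constraint:

    #include <cmath>

    // Reconstruct the third component of a unit normal whose z is
    // known to be non-negative.
    float deriveZ( float x, float y )
    {
        float zsq = 1.0f - x * x - y * y;
        return ( zsq > 0.0f ) ? std::sqrt( zsq ) : 0.0f;
    }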

Paul Martz
Skew Matrix Software LLC
http://www.skew-matrix.com
+1 303 859 9466
