> Date:         Mon, 19 Aug 2002 15:57:57 -0600
> From: Etienne Rossignon <[EMAIL PROTECTED]>
>
> I have been looking with great interest at the publication in SIGGRAPH 1995
> about compression of normals.
>
> Deering, Michael. "Geometry Compression." Computer Graphics Proceedings,
> Annual Conference Series, 1995, ACM SIGGRAPH, pp. 13-19.  I have also been
> looking at [...], which mentions the algorithm to compress a normal vector
> into a 17-bit word.
>
> However, I have a problem understanding how the 11 bits could be generated
> from the normalized Theta and Phi parameters (so-called u and v).

The 17-bit normal is a theoretical limit based on empirical observations
indicating that only about 100,000 distinct normals are needed to render a
scene indistinguishable to a human from one generated with the usual 96-bit
normals.  Deering's normal encoding exploits a 48-fold symmetry of the
sphere, so those normals can be represented with 3 bits indicating an
octant, another 3 bits for a sextant within the octant, and 11 bits (2048
entries) of table lookup.  This provides 98,304 normals.
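That 3 + 3 + 11 split can be sketched as plain bit-field extraction.  The
field order and bit positions below are my assumptions for illustration;
the spec defines the actual layout:

```java
// Minimal sketch of unpacking a 17-bit normal into octant/sextant/index.
// Field positions are assumed, not taken from the spec.
public class NormalFields {
    public static int octant(int n)  { return (n >> 14) & 0x7; } // which of 8 octants
    public static int sextant(int n) { return (n >> 11) & 0x7; } // which of 6 sextants
    public static int index(int n)   { return n & 0x7FF; }       // 11-bit table index

    public static void main(String[] args) {
        int packed = (5 << 14) | (2 << 11) | 1234;  // a 17-bit value
        // 8 octants * 6 sextants * 2048 indices = 98,304 distinct normals
        System.out.println(octant(packed) + "/" + sextant(packed) + "/" + index(packed));
    }
}
```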

However, it is desirable for the normal encoding to support delta encodings
and scalable density distributions.  So instead of a monolithic 2048-entry
normal table, a table addressed with two 6-bit (u, v) parameters is used.
This pushes the maximum normal length to 18 bits, but gives us the
advantage of being able to encode consecutive normals as (u, v) deltas from
previous normals, and allows us to use the same normal table to look up
normals at smaller quantizations than 6 bits per (u, v) component.
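A sketch of the 18-bit (u, v) form and a delta update, assuming a
[octant:3][sextant:3][u:6][v:6] layout (my assumption, not the spec's), and
assuming the delta stays inside one sextant -- a real codec must also
handle deltas that cross sextant/octant boundaries:

```java
// Sketch of an 18-bit (octant, sextant, u, v) normal and a delta update.
// The field layout is assumed; boundary wrapping is deliberately omitted.
public class UvNormal {
    public static int pack(int octant, int sextant, int u, int v) {
        return (octant << 15) | (sextant << 12) | (u << 6) | v;
    }

    public static int u(int n) { return (n >> 6) & 0x3F; }
    public static int v(int n) { return n & 0x3F; }

    // Apply a signed (du, dv) delta to the previous normal's (u, v),
    // keeping the octant/sextant bits unchanged.
    public static int applyDelta(int prev, int du, int dv) {
        int u = u(prev) + du;
        int v = v(prev) + dv;
        return (prev & ~0xFFF) | (u << 6) | v;
    }
}
```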

So the 17- vs. 18-bit normal is an implementation issue.  We can achieve
better compression on average with the 18-bit normal encoding than we can
with the 17-bit encoding since the former allows delta-encodings and
variable quantization levels.

Sun used to sell a graphics accelerator that decompressed geometry in
hardware (the Elite 3D rev 2), but that line has been EOL'ed and Sun has no
plans to implement hardware geometry decompression on future products.  It's
too bad, but our major CAD/CAM customers at the time weren't able to exploit
the advantages of hardware decompression due to the continuous editing
nature of the applications and the overhead of the compression process.
It's actually much more suitable for the pre-generated models that Java 3D
applications tend to use.

Anyway, the point of the digression is that the compressed geometry encoding
was totally oriented toward hardware decompressor implementations, which was
why it was important to have a small normal table that could be stored in
on-board ROM, and why all the command header/body shuffling is required in
the specification.  As you noted, that hardware ROM table needs a little
more than 2048 entries, so there is special logic to handle the limit cases.
In our software Java 3D and OpenGL implementations we actually use a 65 x 65
entry normal table just because it's simpler.
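A 65 x 65 table like that can be filled with a simple double loop.  The
(u, v) to angle mapping below is only illustrative (my assumption), not the
exact parameterization the spec or the Java 3D source uses:

```java
// Builds a 65 x 65 grid of unit normals over one octant-sized patch of the
// sphere.  The (u, v) -> (theta, phi) mapping is an illustrative
// assumption, not the actual Deering parameterization.
public class NormalTable {
    static final int GRID = 64;  // 6-bit u, v each in [0, 64]
    static final double[][][] TABLE = new double[GRID + 1][GRID + 1][];

    static {
        for (int u = 0; u <= GRID; u++) {
            for (int v = 0; v <= GRID; v++) {
                double theta = (Math.PI / 4.0) * u / GRID;
                double phi   = (Math.PI / 4.0) * v / GRID;
                // (cos t cos p, sin p, sin t cos p) is already unit length
                TABLE[u][v] = new double[] {
                    Math.cos(theta) * Math.cos(phi),
                    Math.sin(phi),
                    Math.sin(theta) * Math.cos(phi)
                };
            }
        }
    }
}
```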

> Can someone describe the method implemented in Java3D to get rid of the
> 30 values?

We only get rid of 14 special normals -- the 6 axis-aligned normals and the
8 mid-octant normals.  Since the 3-bit sextant field has 8 codes but only 6
sextants, 2 codes are unused, and we use a special encoding in the
sextant/octant fields to flag these normals.
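One way to read that escape could look like the sketch below.  The choice
of 6 and 7 as the escape codes and the octant-to-normal mapping are my
assumptions for illustration, not the encoding the spec defines:

```java
// Sketch of flagging the 14 special normals via the 2 unused sextant
// codes.  Escape values and octant mapping are assumed.
public class SpecialNormals {
    public static boolean isSpecial(int sextant) {
        return sextant >= 6;  // only codes 0..5 name real sextants
    }

    public static double[] specialNormal(int sextant, int octant) {
        if (sextant == 6) {
            // the 6 axis-aligned normals, selected by the octant field
            double[][] axes = {
                {  1, 0, 0 }, { -1, 0, 0 },
                {  0, 1, 0 }, {  0, -1, 0 },
                {  0, 0, 1 }, {  0, 0, -1 }
            };
            return axes[octant];
        }
        // sextant == 7: the 8 mid-octant normals (+-1, +-1, +-1)/sqrt(3),
        // one per octant, sign bits taken from the octant code
        double s = 1.0 / Math.sqrt(3.0);
        return new double[] {
            (octant & 4) != 0 ? -s : s,
            (octant & 2) != 0 ? -s : s,
            (octant & 1) != 0 ? -s : s
        };
    }
}
```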

> Can someone provide me with some information on how to build up the index
> table?

The source code is available in
com.sun.j3d.utils.compression.CompressionStreamNormal.java.

-- Mark Hood

===========================================================================
To unsubscribe, send email to [EMAIL PROTECTED] and include in the body
of the message "signoff JAVA3D-INTEREST".  For general help, send email to
[EMAIL PROTECTED] and include in the body of the message "help".
