Howzit Ian?

Thanks for your response.

> I'm not the most qualified to answer this, but I think most of the more
> qualified people are pretty busy adding some of these features. :)

Noted. But if yours is the only response and/or there are no differing answers, guess 
what: you become correct (most qualified) by default. <g> 

Your "What exactly do you mean?" replies seem to have a suspiciously high correlation 
with the marketing-blurb features. <g> 
In other words, the list comes from ATI's brochures on their various cards.

I'll try to clarify.
 
> ATI R100 (Radeon)
> =================

> Mapping
> =======
> Bump --------------------------- No.  Will be possible once (if) the
>                                  extension is added to Mesa.  By this I am
>                                  assuming you mean environment bumpmapping.
Yes, environment bump mapping. 

> Emboss ------------------------- What exactly do you mean?  If you are
>                                  referring to Nvidia's NV_texgen_emboss
>                                  extension, then it will likely never be
>                                  supported due to Nvidia's IP.
It was in ATI's brochure; I grabbed it out of this brochure point:
* Emboss, Dot Product 3 and
  Environment bump mapping
(that's letter for letter, same layout - you decide please)

Please see this ATI page on how to do it in HW & with OpenGL:
http://www.ati.com/developer/sdk/RadeonSDK/Html/Tutorials/RadeonBumpMap.html#EMBOSS

PC Paradox: 
(http://www.pcparadox.com/Editorials/BumpMapping/Bump2.shtml#emboss)

Emboss Mapping
The real name for emboss mapping is Multi-Pass Alpha 
Blended Bump Mapping. So as you can see, "emboss mapping" 
sorta stuck as the name. (and the acronym MPABBM really didn't seem 
to fit either :) Well there is a reason that emboss mapping has 
that weird funky name. The name is actually a great description 
of how this technique gets around the whole lighting problem I discussed 
on the previous page. But first I'd like to start off by saying 
that emboss mapping was the first method used to simulate bump mapping 
in real time, and thus was lacking in many ways. These small problems 
made emboss mapping look dullish and took an unnecessary amount 
of time for such a simple rendition of the effect. 
           
Ok, now emboss mapping achieves the bumpy effect by 
creating a monochrome version of the texture map being "bumpified" 
and then applying it to the polygon and shifting it slightly. To 
help you visualize this effect, think of a drop shadow effect, where 
lettering on a page has a black set of the same lettering offset 
just a little bit. Drop shadowing and emboss mapping are essentially 
the same. In emboss mapping once the monochrome version of the texture 
has been shifted, it is then cut and blended with the original and 
applied to the texture, giving it the bumpy effect. 
           
There are many limitations to emboss mapping, and 
here are a few. Emboss mapping only supports polygonal objects and 
cannot be applied to volumetric or multi-lit surfaces. Also, emboss 
mapping is limited to lighting coming from a certain angle (45 to 
-45 degrees). It cannot handle more than one height of bumps because 
the bumping has to be uniform across the entire object. And most 
importantly, emboss mapping can really slow down your CPU because 
of all the converting and FPU calculations it has to do to shift 
a texture perfectly.
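
Just to make the quoted technique concrete, here is a rough two-pass sketch of the 
idea in plain OpenGL. This is my own illustration, not code from the article: 
height_tex (a monochrome copy of the texture), the (du, dv) shift derived from the 
light direction, and EXT_blend_subtract support are all assumed.

#include <GL/gl.h>
#include <GL/glext.h>

static void draw_quad(float du, float dv)
{
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f + du, 0.0f + dv); glVertex3f(-1.0f, -1.0f, 0.0f);
    glTexCoord2f(1.0f + du, 0.0f + dv); glVertex3f( 1.0f, -1.0f, 0.0f);
    glTexCoord2f(1.0f + du, 1.0f + dv); glVertex3f( 1.0f,  1.0f, 0.0f);
    glTexCoord2f(0.0f + du, 1.0f + dv); glVertex3f(-1.0f,  1.0f, 0.0f);
    glEnd();
}

void emboss_passes(GLuint height_tex, float du, float dv)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, height_tex);

    /* Pass 1: draw the unshifted monochrome height map. */
    glDisable(GL_BLEND);
    draw_quad(0.0f, 0.0f);

    /* Pass 2: draw the same geometry with shifted texture coordinates
     * and subtract it from pass 1 (dst - src), leaving the "drop
     * shadow" difference that reads as bumps. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    glBlendEquationEXT(GL_FUNC_REVERSE_SUBTRACT_EXT);
    draw_quad(du, dv);
    glBlendEquationEXT(GL_FUNC_ADD_EXT);
    glDisable(GL_BLEND);
}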
           

> System Memory Blits ------------ What exactly do you mean?
I'll dig up a decent definition for you; it is, however, from DX.


> Superscalar Rendering ---------- What exactly do you mean?
Perhaps they describe the R100's rendering as "superscalar" because it has two 
pipelines?

From Lost Circuits:
(http://www.lostcircuits.com/video/atifury/3.shtml)
SuperScalar Rendering Engine
The RAGE 128 uses two graphics pipelines working in concert to process two pixels each 
clock cycle. This kind of parallelism is typical of a superscalar architecture. 
Consequently, the two RAGE 128 engines which render the scene in parallel are referred 
to as a Super Scalar Rendering Engine. The speed of rendering is very close to twice 
that of single-pipelined graphics chips.

> Twin Cache Architecture -------- What exactly do you mean?
From PC Insights:
(http://www.pcinsight.com/reviews/aiw128/aiw1283.asp)
Twin Cache Architecture
Of all the 3D features of the Rage 128 chip, the Twin Cache Architecture seems to 
stand out the most because it is unique to the Rage 128. The Rage 128 uses an 8KB 
buffer to store texels that are used by the 3D texel engine. In order to improve 
performance even more though, ATI engineers have also incorporated an 8KB pixel cache 
used to write pixels back to the frame buffer.

From Lost Circuits:
(http://www.lostcircuits.com/video/atifury/3.shtml)
Twin Cache Architecture
Like microprocessors, the on-chip cache in graphics chips is growing dramatically.  
The RAGE 128 has not only incorporated significantly more on-chip memory, it has 
expanded the role of the cache in achieving optimal performance.  Although the 
combined bandwidth of 128-bit memory and the AGP bus is vastly improved over prior 
generations of graphics chips, this alone is still insufficient to achieve the types 
of performance that games and other applications want to see.  

The concept in the RAGE 128 is to go beyond caching texels on the input side of the 3D 
engine.  To achieve maximum performance, pixels being written back to the frame buffer 
also need to be cached.  As a result, both the texels from the texture maps as they 
are read and the pixels as they are written back to the frame buffer are now cached.  
The Twin-Cache Architecture allows the on-chip cache resources to be used efficiently 
to deliver maximum performance in all situations.  The benefits of this cache are not 
limited to 3D operations; it also helps attain optimal 2D and video performance, 
achieving maximum benefit from the die area used for the pixel cache.

Both are pure marketing blurb as far as I can see, i.e. that is simply the way the HW 
was designed. These points were literally all there was, i.e.:
*Superscalar Rendering
*Twin Cache Architecture

> Texture
> =======
These points were literally all there was, i.e.:
*Texture Cache
*Texture Compositing

> Cache -------------------------- What exactly do you mean?
A cache for textures, in the same way that a CPU has a data and an instruction cache. 
It appears to be part of the HW design, not really a driver issue as far as I can see.

> Compositing -------------------- What exactly do you mean?

From ATI:
(http://www.ati.com/developer/ravesupt.html#TEXTCOMP)

Texture Compositing
Texture compositing is the ability to blend two textures together in a single pass 
operation. This functionality is supported on the Rage Pro, but not Rage2, hardware. 
To use this feature, kATICompositing must be set to "true". Then, kATISecond_Texture 
is used to specify a second map to be blended with the texture specified by the usual 
kQATag_Texture. 
When texture compositing is enabled the ATI hardware can accept two independent sets 
of texture coordinates for each vertex. The second set of coordinates indexes into the 
secondary texture map. Thus, when kATICompositing is enabled, the ks_r, ks_g, ks_b 
fields of the TQAVTexture struct are interpreted by the ATI RAVE driver as secondary 
uOverW, vOverW, and invW texture coordinates. This means that texture compositing 
overrides specular highlight texture lighting. The ATI hardware can do both specular 
lighting and compositing at the same time; however, since texture compositing is most 
likely to be used to achieve specular lighting effects, it seemed acceptable to 
overload the use of the vertex specular color components in this way (if this causes 
any application serious trouble we could consider changing this behavior). 

There are three different texture compositing modes that can be set using the 
kATICompositingFunc integer state tag: blend, modulate, and add. In blend mode the two 
textures are combined based on the value of kATICompositingFactor (0.0 - 1.0). A value 
of 0.0 specifies that the resulting pixel should be 100% of the primary map added to 
0/16ths of the secondary map; 0.5 specifies 7/16ths of the primary map and 8/16ths of 
the secondary map; and 1.0 specifies 15/16ths of the secondary map. To get 100% of the 
secondary map you must make it the primary one. In modulate mode the resulting pixel 
is the first texture modulated by the second texture. In additive mode the resulting 
pixel is the first texture (after lighting) plus the second texture. 

Bilinear texture filtering for the secondary texture can be enabled and disabled using 
the kATISecondTexMin and kATISecondTexMag integer state tags. Finally, 
kATICompositingAlpha can be used to specify that the alpha values from the second 
texture should be used as the compositing factor in blend mode. Note that in this mode 
the upper 4 bits of the texture alpha are used, and a factor of 0xF indicates that 
none of the primary texture should be used.

There is one restriction on secondary textures: they must have the same texel size as 
the primary texture that they are being blended with. In other words a kQAPixel_ARGB16 
texture can be blended with a kQAPixel_ARGB16, a kQAPixel_RGB16 or a 
kQATIPixel_ARGB4444 texture, since all of these have texels that are 16 bits. 

Here is example code to enable texture compositing. The following commands can be 
given in any order:
...
int CompFunc = kQATIComposeFunctionBlend;

QASetInt( cntx, (TQATagInt)kATICompositing, true );
QASetInt( cntx, (TQATagInt)kATICompositingFunc, CompFunc);
QASetFloat( cntx, (TQATagFloat)kATICompositingFactor,0.5);
QASetInt( cntx, (TQATagInt)kATISecondTexMin, false );
QASetInt( cntx, (TQATagInt)kATISecondTexMag, true );
QASetInt( cntx, (TQATagInt)kATICompositingAlpha, false );

QASetPtr( cntx, kQATag_Texture, pTex );
QASetPtr( cntx, (TQATagPtr)kATISecond_Texture, pSecTex );
...
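
As an aside, the "blend" mode above maps fairly naturally onto standard OpenGL 
multitexturing, which is presumably how Mesa/DRI would expose something equivalent. 
Here is a rough sketch of that mapping (my own code, not ATI's; it assumes the 
ARB_multitexture and EXT_texture_env_combine extensions are available):

#include <GL/gl.h>
#include <GL/glext.h>

void two_texture_blend(GLuint primary, GLuint secondary, float factor)
{
    /* Constant combiner color carrying the blend factor, like
     * kATICompositingFactor. */
    const GLfloat c[4] = { factor, factor, factor, factor };

    /* Unit 0: the primary texture, modulated by lighting as usual. */
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glBindTexture(GL_TEXTURE_2D, primary);
    glEnable(GL_TEXTURE_2D);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    /* Unit 1: interpolate the secondary texture in by the constant:
     * result = secondary * factor + primary * (1 - factor), so 0.0
     * gives 100% primary, matching the spirit of the RAVE blend mode
     * (which tops out at 15/16ths of the secondary). */
    glActiveTextureARB(GL_TEXTURE1_ARB);
    glBindTexture(GL_TEXTURE_2D, secondary);
    glEnable(GL_TEXTURE_2D);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);
    glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT, GL_INTERPOLATE_EXT);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_EXT, GL_TEXTURE);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_EXT, GL_PREVIOUS_EXT);
    glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE2_RGB_EXT, GL_CONSTANT_EXT);
    glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, c);
}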

If I google it I come up with:
aka "splatting", per Charles Bloom

Level of Detail for 3D Graphics
... Game-specific difficulties with LOD. Modeling Practices.
Vertex Representation. Texture Compositing. ...
lodbook.com/toc/

which looks as though it may be an interesting book for DRI developers (note: I have 
only seen the contents page).


> Effects?
> ========
These seem to be a list of effects that the card is able to do.

e.g. "Supports 3D textures for volumetric effects, such as fog or dynamic lighting, 
e.g. fire burning in the fireplace"


> Fog Effects -------------------- What exactly do you mean?

From ATI:
(http://www.ati.com/developer/ravesupt.html#FOG)

Fog
Support for "fog" effects in the ATI RAVE driver is modeled after fog in OpenGL. If 
enabled, fog blends a pre-defined fog color with a pixel's post-texturing color using 
a blending factor f. This factor is computed according to one of four equations:

f = a               - fog factor is taken from vertex alpha
f = (e - z)/(e - s) - fog factor diminishes linearly
f = exp(-d * z)     - fog factor decays exponentially
f = exp(-(d * z)^2) - fog factor decays exponentially squared

where:

a,    is the vertex alpha (0.0 - 1.0). Note that similarly to alpha blending, values 
of 'a' close to 0.0 result in the object becoming more "transparent", and the fog 
color becoming more dominant. So a = 0.0 denotes maximum fog; this may be somewhat 
counter-intuitive.

z,    is the eye-coordinate distance from the eye. For fogging, eye coordinate z is 
computed by 1.0/invW. This reproduces the eye-coordinate z value before the 
homogeneous perspective divide. For exponential fog (Exp, Exp2), the z coordinate is 
normalized 0.0 -> e, and then clamped 0.0 <= z <= 8.0.
In the simplest case, these eye coordinates may not be normalized and will have the 
same range as your models have. For example 0.0 to 2000.0 would not be uncommon. 
Alternately, these z values may be normalized to 0.0 to 1.0. In either case, the eye 
coordinate z values will be scaled by the fog end value, before computing the fog 
factor, so the fog end can be used to control how the z values are normalized.

s,   is the fog start value. This should be >= 0.0
e,   is the fog end value. This should be >= 1.0. This value is used to normalize the 
eye coordinate z values, so it should be in the same range as those z values. For 
example if your eye coordinate z values range from 0.0 to 2000.0, you could set your 
fog end to 4000.0, indicating that at a distance of 4000.0 objects should be 
completely lost in the fog. NOTE: 'e' should not equal 's'

d,   is fog density factor. A value of 2.0 causes the original color to decay, and the 
fog to build up twice as fast; a value of 0.5 would cause the fog to build up half as 
fast. This is only used in the exponential fog equations.

To select one of those equations:

          QASetInt(context, (TQATagInt)kATIFogMode, fog_mode );
where "fog_mode" is one of the following enumerated types:
          typedef enum {
                    kQATIFogDisable     = 0,
                    kQATIFogExp = 1,
                    kQATIFogExp2        = 2,
                    kQATIFogAlpha       = 3,
                    kQATIFogLinear      = 4
          } TQATIFogMode;
Here is example code to set the fog mode to kQATIFogLinear. The following commands can 
be given in any order:
...
QASetInt( DrawContext,    (TQATagInt)kATIFogMode, kQATIFogLinear );
QASetFloat( DrawContext, (TQATagFloat)kATIFogColor_r, 1.0 ); // 0.0 - 1.0
QASetFloat( DrawContext, (TQATagFloat)kATIFogColor_g, 1.0 ); // 0.0 - 1.0
QASetFloat( DrawContext, (TQATagFloat)kATIFogColor_b, 1.0 ); // 0.0 - 1.0
QASetFloat( DrawContext, (TQATagFloat)kATIFogDensity, 0.5 ); // >= 0.0
QASetFloat( DrawContext, (TQATagFloat)kATIFogStart,      0.0 ); // >= 0.0
QASetFloat( DrawContext, (TQATagFloat)kATIFogEnd,        2000.0 ); // >= 1.0
...
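
For clarity, here is what those four equations and the final blend boil down to in 
plain C. This is my own illustration, not ATI code; the blend convention, 
f * pixel + (1 - f) * fog, follows the OpenGL model the driver is based on.

#include <math.h>

float fog_alpha(float a)                    { return a; }                 /* f = a */
float fog_linear(float z, float s, float e) { return (e - z) / (e - s); } /* linear */
float fog_exp(float z, float d)             { return expf(-d * z); }      /* exp */
float fog_exp2(float z, float d)            { return expf(-(d * z) * (d * z)); }

/* Blend one color channel toward the fog color; f close to 0.0 means
 * maximum fog, as noted for the alpha mode above. */
float apply_fog(float f, float pixel, float fog_color)
{
    if (f < 0.0f) f = 0.0f;   /* clamp the factor to [0, 1] */
    if (f > 1.0f) f = 1.0f;
    return f * pixel + (1.0f - f) * fog_color;
}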


> Texture Lighting --------------- No, but I'm not 100% sure.


> Shadows ------------------------ If you mean {SGIX,ARB}_depth_texture,
>                                  then no.
Rage3D
(http://www.rage3d.com/articles/specs/radeon.shtml)

Hardware support for 3D shadows
Shadowing for each separate light source with the help of a special Priority Buffer


> Spotlights --------------------- What exactly do you mean?
I'm thinking light sources as in 3D rendering programs (Maya).

From Rage3D:
8-source hardware lighting for the whole scene (directional aka infinite, and point 
lights aka local)
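
In OpenGL terms that presumably maps onto the usual eight GL_LIGHTn slots; the 
directional ("infinite") versus point ("local") distinction is just the w component 
of GL_POSITION, e.g.:

#include <GL/gl.h>

void setup_example_lights(void)
{
    /* w = 0.0: a direction vector, i.e. an infinite/directional light. */
    const GLfloat directional[4] = { 0.3f, 1.0f, 0.5f, 0.0f };
    /* w = 1.0: a position in the scene, i.e. a local/point light. */
    const GLfloat point[4]       = { 2.0f, 4.0f, 1.0f, 1.0f };

    glLightfv(GL_LIGHT0, GL_POSITION, directional);
    glLightfv(GL_LIGHT1, GL_POSITION, point);
    glEnable(GL_LIGHT0);
    glEnable(GL_LIGHT1);
    glEnable(GL_LIGHTING);
}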

> Texture Morphing --------------- What exactly do you mean?
I can only define it by what it sounds like: take two textures and morph from one to 
the other.
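(If that guess is right, it would essentially be the texture compositing "blend" mode 
from earlier with the blend factor animated over time.)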

These "Effects" were in ATI's brochure, I grabbed them out of this brochure point:
* Fog effects, texture lighting, video
  textures, reflections, shadows,
  spotlights, LOD biasing and
  texture morphing
(that's letter for letter, same layout - you decide please)
 

 
> Video Features
> ==============
> Adaptive De-interlacing - See the GATOS project @ http://gatos.sourceforge.net/
> Motion compensation ----- See the GATOS project
> IDCT (sp?) -------------- See the GATOS project
Yeah, I kind of had an inkling that the GATOS project was the place to look. The only 
reason this is in here is because it's part of the card, not because it has anything 
to do with DRI.


> Driver Optimisations
> ====================
> 3DNow! - Yes
> SSE ---- Yes
> SSE2 --- No

Oh well, suits me fine; that's all my CPU supports anyway.

I should probably add:
Pixel shader ------------------------
Programmable texture blending modes -
Projective Textures------------------

Cheers
Liam

