Hi,

I guess it somehow optimizes the volumetric sampling. I have no idea what's going on 'under the hood'. Vesa..?

The idea is actually simple. Fog materials often require randomization of the sampled position - this reduces banding. The randomization is simply done as:

   sample_position = original_position + random_delta
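The jittering above can be sketched as follows. This is a minimal illustration, not Realsoft's actual code; the function name and the choice of a uniform offset over one step length are my assumptions:

```python
import random

def jitter_sample(original_position, step_size):
    # Hypothetical sketch: offset the sample position by a random
    # amount up to one step length. This replaces regular banding
    # artifacts with less objectionable noise.
    random_delta = random.uniform(0.0, step_size)
    return original_position + random_delta
```

The offset is typically bounded by the sampling step so that the jittered samples still cover the ray interval evenly on average.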

Purely position-related effects, such as shadows and illumination in general, automatically take the new position correctly into account. However, some effects may be defined using the 'Map coords' channel, which is computed before the randomization happens. In other words, the rendering pipeline proceeds as:

       Ray tracer: render engine computes position
           Material mapping: parallel map computes Map coords from position
               Material:
                       Turbidity is computed from Map coords
                       Position changes by randomization
       Ray tracer: render engine computes lighting using randomized position

In the above, turbidity and lighting are not computed from the same position. Often the small difference does not matter, because the effect is quite blurry. However, a perfectly exact method is this:

       Ray tracer: render engine computes position
           Material mapping: parallel map computes Map coords from position
               Material:
                       Position changes by randomization
                       Map VSL object recomputes Map coords from random position
                       Turbidity is computed from the new Map coords value
       Ray tracer: render engine computes lighting using randomized position
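The difference between the two pipelines can be sketched like this. Everything here is hypothetical stand-in code - `map_coords` and `turbidity` are placeholders for the real parallel mapping and material evaluation - but the ordering of the steps matches the two listings:

```python
import random

def map_coords(position):
    # Stand-in for the parallel mapping: derive map coordinates
    # from a sample position (here just a simple scaling).
    return position * 2.0

def turbidity(coords):
    # Stand-in turbidity function of the map coordinates.
    return abs(coords) % 1.0

def shade_inexact(position, step):
    coords = map_coords(position)              # coords from original position
    position += random.uniform(0.0, step)      # randomization happens after
    return turbidity(coords), position         # turbidity and lighting disagree

def shade_exact(position, step):
    position += random.uniform(0.0, step)      # randomization happens first
    coords = map_coords(position)              # Map object recomputes coords
    return turbidity(coords), position         # both use the same position
```

In `shade_exact`, the turbidity returned is always consistent with the position that the lighting pass will use, which is exactly what the extra Map evaluation buys.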

I hope this makes the purpose of the Map object clear.


Best regards,

Vesa
