Stuart wrote:

> Personally I'd really like to see the rendering systems unified, even
> though I don't have enough GPU to run them both together (or indeed
> Rembrandt with shadows).  It's just make for a more consistent  
> experience. (...)
> I had a look at this a couple of weeks ago but
> didn't get anywhere significant as I didn't know whether the atmospheric
> scattering should be added to the fog pass or not.

I think it's not even conceptually so easy. The idea of Rembrandt summarized 
in one sentence is 'compute realistic illumination and shadows in a scene with 
multiple light sources', with deferred rendering as the tool of choice.

The idea of atmospheric light scattering summarized in one sentence is 'compute 
illumination where the light you see is a weighted line integral along a ray 
with the light at each ray point scattered in from everywhere else in the 
scene'. There are various tools used to solve that problem in real time:
- the skydome computes the line integrals numerically by summing over a number 
  of points on the ray,
- the lower-atmosphere fog code uses an analytical solution of the light 
  diffusion equation in an optically thick medium,
- light attenuation by clouds is treated on average,
- light attenuation by terrain obstacles is treated with a trick,
- directional scattering of light for low sun uses a parametrized approximation 
  to the real solution,
- an irradiance map captures diffuse light from the sky vs. the much reduced 
  diffuse light from terrain, ...
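To make the first point concrete, here is a minimal numerical sketch of such a 
line integral, assuming a homogeneous medium with constant extinction (the 
function and parameter names are my own for illustration, not FlightGear's):

```python
import math

def fog_inscatter(ray_length, n_samples, extinction, source):
    """Numerically accumulate in-scattered light along a ray:
    each segment contributes source(s) * extinction * transmittance * ds,
    where the transmittance exp(-optical depth) damps light on its way
    back to the eye. Assumes constant extinction along the ray."""
    ds = ray_length / n_samples
    optical_depth = 0.0
    total = 0.0
    for i in range(n_samples):
        s = (i + 0.5) * ds          # midpoint of this ray segment
        optical_depth += extinction * ds
        total += source(s) * extinction * math.exp(-optical_depth) * ds
    return total
```

For a constant source the sum converges to 1 - exp(-extinction * ray_length), 
which is a useful sanity check on the discretization.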

The point being: the real challenge in atmospheric light scattering is not to 
compute the light on a vertex - it's to compute the light color of the fog 
obscuring the vertex by doing the line integral along the ray from eye to 
vertex. You may for instance notice this when looking down through 6/8 cloud 
cover - the terrain is pretty much dark, but the haze that obscures it, as 
seen from above, is bright. If you then descend, there's an intensity and hue 
gradient in the fog - the fog gets brighter above and darker below, and even 
fog illuminated by colored sunset light from above changes hue to a dull grey 
below. These things are usually not noticed if done right, but they leave the 
impression that something's wrong with the scene if they're not done right.
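The altitude gradient can be caricatured with a simple blend between a lit fog 
color above the layer and a shadowed grey below it (a toy sketch of the effect, 
not the actual shader logic; all names are hypothetical):

```python
def fog_color(alt, cloud_base, cloud_top, lit_color, shadow_color):
    """Toy model of the fog hue/intensity gradient around a cloud layer:
    bright, sun-colored fog above the layer, dull grey below, and a
    linear ramp through the layer itself. Colors are (r, g, b) tuples."""
    if alt >= cloud_top:
        return lit_color
    if alt <= cloud_base:
        return shadow_color
    t = (alt - cloud_base) / (cloud_top - cloud_base)
    return tuple(s + t * (l - s) for l, s in zip(lit_color, shadow_color))
```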

At this point, the complications of merging that with Rembrandt probably become 
apparent, and to my mind that's really a question of doing more R&D, not simply 
of merging code. Just some example questions:

Say we have a mountain sticking out above 6/8 cloud cover and we can also look 
into the valley. Above the cloud cover, Rembrandt needs to do shadows as usual, 
but down in the valley there are no more shadows. Will Rembrandt eventually do 
cloud shadows explicitly? Then this isn't an issue, but we still need to shade 
the fog beneath the clouds, because a shadow map doesn't really do that. Or 
will it not - then how does Rembrandt learn to modify the shadows?

Imagine approaching an airport with 5 km visibility at night - what's the most 
prominent thing you see? Bright orange glowing fog, I guess - all the light 
sources at the airport act as sources for diffuse light scattering in the fog. 
Now, it's relatively easy to come up with an analytic solution to light 
diffusion for one light source which is always outside the fog, but it's a 
completely different beast to handle an arbitrary number of light sources at 
arbitrary positions. There is no general solution which comes even remotely 
close to running in real time, and thus the open question is: is there a 
viable approximation scheme we can use?
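One candidate approximation - purely my sketch here, not anything implemented - 
would be to treat each light independently and just sum a single-scatter glow 
term per light. The obvious flaw is that it ignores multiple scattering, which 
is exactly what dominates in dense fog, so it at best illustrates the shape of 
the problem:

```python
import math

def fog_glow(eye, lights, sigma):
    """Crude additive approximation for fog glow at the eye:
    sum I * exp(-sigma * d) / d^2 over all lights (pos, intensity).
    Single scattering only -- multiple scattering is neglected."""
    glow = 0.0
    for pos, intensity in lights:
        d = math.dist(eye, pos)
        if d > 0.0:
            glow += intensity * math.exp(-sigma * d) / (d * d)
    return glow
```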

Or think of heavy fog and your own landing lights - from the cockpit you see 
bright backscatter from the fog, and from the ground you see a very prominent 
Mie forward-scattering peak (think of a rainy night with cars on the opposite 
lane).
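A forward-scattering peak of that kind is commonly modeled with the standard 
Henyey-Greenstein phase function - this is the textbook formula, not a claim 
about what FlightGear actually uses:

```python
import math

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function. The asymmetry parameter g in
    (-1, 1) controls directionality: g near +1 gives a strong forward
    peak (Mie-like), g = 0 is isotropic scattering."""
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)
```

With g = 0.8 the forward direction (cos_theta = 1) is brighter than the 
backward direction by a factor of several hundred, which is exactly the 
"headlights in rain" effect.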

So trying to merge the ideas into a consistent rendering scheme is much 
trickier than merging the code bases to get some features from one framework 
and some from the other. Even just merging the code is probably tricky - in 
the default fog scheme, fogging is trivial: fog has one color and is applied 
based on distance. In atmospheric light scattering, fogging and light 
computations are basically inseparable, and a large part of the shader is busy 
just determining the fog color, so this might even require changes in the 
Rembrandt workflow (I know too little about it to give a more solid statement).
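For contrast, the whole of classic distance fog fits in a few lines - a 
generic sketch of the standard exponential scheme, not the actual FlightGear 
shader:

```python
import math

def classic_fog(frag_color, fog_color, distance, density):
    """Classic single-color distance fog: fog factor exp(-density * d),
    then a plain mix between fragment and fog color. No knowledge of
    lighting is needed -- fog_color is just a constant."""
    f = math.exp(-density * distance)
    return tuple(f * c + (1.0 - f) * fc
                 for c, fc in zip(frag_color, fog_color))
```

That this is a one-liner per fragment, while atmospheric scattering spends most 
of a shader computing the fog color, is the crux of the integration problem.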

To me, the message is: if we want this, we need to throw a lot of manpower and 
time at the problem. If I start doing this on my own, we can wait till FG 4.0 
and might have something by then. This is of a magnitude that probably requires 
3-4 people investing a lot of time in a coordinated effort to get right. And 
it's foreseeable that it will be slow on the majority of computers.

So - is it still worth trying it?

* Thorsten
_______________________________________________
Flightgear-devel mailing list
Flightgear-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/flightgear-devel
