First of all, a big thanks to Tim - Akenine-Möller & Co. is indeed quite an 
interesting read, and in addition to gaining a better understanding, I've so 
far experimented successfully with a heightfield for parallax and normal 
mapping and a simple irradiance map instead of the ambient term in lighting. 

For those not inclined to read through a longer email, my question up front: I 
have a function and I want a texture holding its values - how do I create this 
without editing every pixel offline? And can it also be done at runtime? 
Where, for instance, is the 3D noise texture defined and filled - is it simple 
to change the procedure?

For those interested in the context:

I have managed to create quite compelling terrain visuals from close-up (see 
here 
http://www.flightgear.org/forums/viewtopic.php?f=47&t=16884&start=15#p162613 
for two examples), but rendering seems to get too slow too soon for anything 
really fancy (of course, a big roadblock is that I'm implementing all this on 
top of atmospheric light scattering which is slow to begin with - I think in 
the default light and fog scheme it might run with decent speed).

One problem that really kills the framerate is de-tiling. In order to de-tile 
properly, one needs to mix noise at different wavelengths such that the 
interference creates a pattern with a much larger periodicity. Three 
wavelengths usually do the job well, two are marginally acceptable, and one is 
quite bad.

So, for each texture component (snow, gradient texture, detail texture, 
height map, fog) I ideally need three noise wavelengths. In the current 3D 
noise, noise[0] contains structures with a qualitatively different 
distribution, so I have to discard that component. noise[3] is too fine and 
runs into texture resolution problems, so I have to discard that as well. 
Since the noise patterns associated with different texture components need to 
be uncorrelated (otherwise there's yet another form of tiling around the 
corner), I need to do a separate noise lookup for every texture component, 
arranging each in a slightly different wavelength interval.

And that's a lot of texture3D() calls, which really makes the system slow.

I could do it in half the time if I got four useful components from every 
texture3D() call, but that requires that I either create a suitable texture 
offline or change the way it is done in the code (I'm also exploring whether 
there is perhaps a function that evaluates faster than a texture lookup 
call...).

Somewhat inversely, I'm also wondering whether a simple texture1D() lookup 
might not be faster than evaluating the light function e / pow((1.0 + a * 
exp(-b * (x-c))), (1.0/d)) three times to get an (r,g,b) triplet. However, I 
have no clue how to dump the function into a texture without hand-crafting 
every pixel.

Finally, given that we have cloud positions and sizes available, I think it'd 
be fairly trivial to cast these into a function that defines a shadow map for 
the clouds. (I would not render the clouds to create that map: first because 
of transparency, second because they're really billboards and likely to come 
out wrong, third because the map wouldn't change fast, and finally because 
it's faster to evaluate one analytical function per cloudlet than to go 
through the ~40 texture sheets that make up a cloudlet.)

So, any pointers along that front would be appreciated for a number of 
reasons... Thanks in advance!

* Thorsten
_______________________________________________
Flightgear-devel mailing list
Flightgear-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/flightgear-devel