Hi Tom,

Sorry if I sound overly pessimistic, but... several of the potential issues are
structurally not that different from problems I've encountered in painting
terrain or setting up credible weather. That doesn't mean I'm in any way
against your plan, but I want to see whether you have new solutions in mind or
whether we'd just run into more or less known problems. So let me pinpoint the
main challenges I see.

> The basic idea would be:
> Based on landclass data and OSM roads, generate a unique hi-res texture  
> for a few km out. (maybe using a base texture and overlays as you  
> describe below.) Gradually reduce texture resolution for terrain further  
> out. (I did some rough estimate which indeed showed I need plenty of  
> video RAM, but not several GBs.) Regenerate the textures as the camera  
> moves.

A LOD system always sounds charming in theory, but I haven't been able to
really make a good one for clouds, for instance.

Shuffling data in and out of graphics memory is, at the moment, for me on a
high-end machine (GeForce GTX 670M) the single identifiable source of uneven
framerates. I get 60 fps like clockwork, unless the system starts bringing
large chunks of terrain or new clouds into the scene. Currently we only do this
when terrain is loaded; you would do it for every LOD stage. So while we might
be able to keep the total memory occupancy sane, the flow of data in and out of
memory is very likely to increase considerably, which might make this problem
worse. Uneven framerates are, I've been told, a no-go for many on this list
(personally I'm somewhat more tolerant in this department).

Another problem with LOD systems is that you need to hide the LOD line very
well - otherwise there's a ring around you where the terrain visibly changes.

As for generating the resolution levels, there are various ways this could be
done:

1) pre-computed LOD-level textures shipped with the scenery
+ costs hardly any runtime performance
- needs a lot of hard disk space and isn't very flexible

2) LOD-level generation at loading time on the CPU
+ needs no hard disk space, can respect current environment conditions to some
degree
- creates a very uneven performance load depending on airspeed; the textures
cost the same memory as pre-computed ones

3) per frame on the GPU
+ needs comparatively little memory, has very even performance load, LOD-levels 
can be implemented fairly trivially (if you don't need it, don't compute it), 
can immediately adjust to environment conditions
- eats plenty of GPU performance (but then, working with textures is what GPU 
fragment pipelines are built for, so there's plenty of hardware support for 
that)

You seem to think of option 2) whereas I mainly work with 3).
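Your "rough estimate" of the video RAM could be sketched like this - a
back-of-envelope of uncompressed texture memory for a ring of runtime-generated
textures as in options 1) and 2). The ring distances, resolutions, and bytes
per pixel below are my assumptions, not measured FG numbers:

```python
import math

# Rough VRAM back-of-envelope for covering terrain rings with unique textures
# at distance-dependent resolution. All numbers are illustrative assumptions.

def ring_texture_bytes(inner_km, outer_km, m_per_px, bytes_per_px=4):
    """Uncompressed texture memory for an annulus covered at m_per_px."""
    area_m2 = math.pi * ((outer_km * 1000) ** 2 - (inner_km * 1000) ** 2)
    pixels = area_m2 / (m_per_px ** 2)
    return pixels * bytes_per_px

# Assumed LOD rings: 1 m/px out to 5 km, 10 m/px to 30 km, 100 m/px to 100 km.
rings = [(0, 5, 1), (5, 30, 10), (30, 100, 100)]
total = sum(ring_texture_bytes(a, b, r) for a, b, r in rings)
print(f"{total / 2**30:.1f} GiB")  # dominated by the innermost 1 m/px ring
```

With these assumptions it comes out well under a GiB - consistent with "plenty
of video RAM, but not several GBs" - but note the innermost ring dominates, so
the result is very sensitive to how far out you keep the full 1 m/px.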


> Terrain that is 100km out doesn't need 1m/px resolution. I'm certainly  
> thinking of a LOD scheme here, so I won't need 11.000 unique texture  
> sheets.

Well, you don't need 11.000 hires texture sheets, but you do need 11.000 unique 
texture sheets unless you want to have a graphical discontinuity where a 
default texture sheet is replaced by a completely different-looking 
specially-designed hires texture sheet. A texture 100 km distant can have a 100 
m per pixel resolution, but it still needs to be an averaged variant of the 
later hires texture.

That's my main problem with a cloud LOD scheme. I know how to create a very
cheap-to-render cloud 100 km distant. I know how to create a nice-looking cloud
close-up. What I don't know is how to replace one with the other without the
cloud suddenly changing its visual appearance completely.

In other words, the problem is that the lowres LOD levels still need to know
what is painted on the hires LOD levels, but you somehow need to achieve this
without actually creating the hires version - because if you create the hires
version every time, the performance gain is pretty much gone.

Procedural texturing on the GPU can do that by simply filtering the hires 
structures out dynamically once they get smaller than a pixel. Textures do it 
by mipmapping. But how do you want to do it with a runtime-generated texture? 
Somehow you need to create the whole hires texture sheet, then mipmap it down 
to the resolution you need, and then throw away the hires information to free 
memory - but that is a very expensive scheme, as there are plenty of texture 
sheets to be generated to fill a ring 100 km out.
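The expensive scheme described above can be sketched in a few lines - generate
the full hires sheet, box-filter it down to the LOD resolution actually needed,
then discard the hires data. The synthesis function and sheet sizes here are
purely illustrative:

```python
# Sketch of: synthesize hires, mipmap down, throw the hires data away.

def make_hires(n):
    """Stand-in for runtime texture synthesis: an n x n grayscale sheet."""
    return [[(x * 31 + y * 17) % 256 for x in range(n)] for y in range(n)]

def mipmap_down(tex, levels):
    """Halve the resolution `levels` times with a 2x2 box filter."""
    for _ in range(levels):
        n = len(tex) // 2
        tex = [[(tex[2*y][2*x] + tex[2*y][2*x+1] +
                 tex[2*y+1][2*x] + tex[2*y+1][2*x+1]) // 4
                for x in range(n)] for y in range(n)]
    return tex

hires = make_hires(256)          # the full synthesis cost is paid here...
lowres = mipmap_down(hires, 4)   # ...even though only 16x16 is kept
del hires                        # free the hires memory again
print(len(lowres), len(lowres[0]))  # 16 16
```

The point the sketch makes: `make_hires` runs at full cost for every sheet in
the 100 km ring, regardless of how little of its output survives.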

> I don't get this yet: why is blending the texture against the  
> surrounding bad, and what's the problem with non-local information?

Blending isn't a unique procedure. Taking a sand texture at 50% alpha and a
rock texture at 50% alpha usually works in a credible way and gives me the
appearance of sand-covered rock, but blending a city texture at 50% alpha with
a forest texture at 50% alpha looks plain silly. If you want to create a rough
stand-in for a park-filled city, you need to create noise at the size scale of
the parks, and then use the noise value to choose either the city texture or
the forest texture to get a half-and-half distribution.
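The selection-instead-of-blending idea could look roughly like this. The noise
function below is a crude hash stand-in for real Perlin noise, and all names
and scales are my invention:

```python
import math

def noise(x, y, scale=50.0):
    """Cheap pseudo-noise in [0, 1) with feature size `scale` (metres)."""
    xi, yi = int(x / scale), int(y / scale)
    return (math.sin(xi * 12.9898 + yi * 78.233) * 43758.5453) % 1.0

def mixed_texel(x, y, city_texel, forest_texel):
    # Alpha blending would average the two texels; selection instead keeps
    # each texel intact and switches at park-sized noise boundaries.
    return city_texel if noise(x, y) < 0.5 else forest_texel

samples = [mixed_texel(x, y, "city", "forest")
           for x in range(0, 1000, 10) for y in range(0, 1000, 10)]
frac_city = samples.count("city") / len(samples)
print(round(frac_city, 2))
```

Each texel stays either pure city or pure forest, so the half-and-half
distribution emerges at the park scale rather than as a washed-out average.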

Some features (forests, agriculture, roads, ...) have very sharp boundaries in
reality; other landclasses (shrubcover, grasscover, herbtundra) blend more or
less smoothly into each other; some (water, sand, ...) generate characteristic
transition regions like beaches. Urban terrain doesn't blend into its
surroundings with any smooth noise at all, but on a house-by-house basis.

Blending is bad because it's complicated to get right - if you do it by hand 
you find it obvious how it should be done, but if you are asked to write down 
the rules so that the computer can do it automatically, it's going to be a 
lengthy instruction book taking lots of parameters into account. Just try 
writing up plausible blending rules between all landclasses we have :-)
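To make the size of that instruction book concrete: every pair of landclasses
needs its own transition rule, so the rule count grows quadratically. The rule
names below are invented examples:

```python
# Why the blending "instruction book" grows quickly: n landclasses need
# n*(n-1)/2 pairwise transition rules. Rule names are invented illustrations.

transition_rules = {
    frozenset({"sand", "rock"}): "alpha-blend 50/50",
    frozenset({"water", "sand"}): "beach transition band",
    frozenset({"city", "forest"}): "park-scale noise selection",
    frozenset({"city", "grass"}): "house-by-house boundary",
    # ... and so on for every remaining pair
}

n_classes = 20  # a modest landclass count
pairs = n_classes * (n_classes - 1) // 2
print(pairs)  # 190 distinct transitions to specify for just 20 classes
```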

The problem with non-local information is that it can't readily be processed on
the GPU, because real-time rendering is geared towards processing local
information only. In practice, a vertex shader doesn't have access to the
information of other vertices or to mesh connectivity, and a fragment shader
has access only to its own pixel (and, I think, its immediate surroundings). If
the GPU offered an easy option for non-local blending, we wouldn't have any
landclass seams right now.

> Yes, I'd be happy to generate different patterns for different countries.  
> If the code supports it, artists will step in here.

So your instruction book on how to do blending gets even longer... Artists
can't step in unless they can encode their artistry in an algorithm. It's
supposed to run automatically on the CPU; the code can't ask an artist how to
do something.

> Great. Can we have overlays for a finite set of buildings?

You can overlay any pattern on anything as long as you can give me a 
fast-evaluating function which generates the pattern. Coming up with the 
function is the problem, and Perlin noise simply doesn't work for buildings. Do 
you have an idea what the function would look like?
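Just to illustrate what a fast-evaluating function for buildings might even
look like (since Perlin noise won't do): a grid-plus-hash scheme that can be
evaluated independently per point, as a fragment shader would need. This is
entirely hypothetical - every name, cell size, and constant here is my
invention, not anything in FG:

```python
# Hypothetical per-point building pattern: divide the plane into cells,
# deterministically hash each cell to decide whether it holds a building
# footprint, and test the query point against that footprint.

def cell_hash(cx, cy):
    """Deterministic pseudo-random value in [0, 1) per grid cell."""
    h = (cx * 73856093) ^ (cy * 19349663)
    return (h % 1000003) / 1000003.0

def is_building(x, y, cell=30.0, density=0.4, footprint=0.5):
    """True if (x, y) lies on a building footprint; evaluable per fragment."""
    cx, cy = int(x // cell), int(y // cell)
    if cell_hash(cx, cy) >= density:       # this cell stays empty
        return False
    # fractional position inside the cell; the footprint occupies the centre
    fx, fy = (x / cell) % 1.0, (y / cell) % 1.0
    m = (1.0 - footprint) / 2.0
    return m <= fx <= 1.0 - m and m <= fy <= 1.0 - m

# Sharp-edged, repeatable footprints - unlike smooth Perlin noise:
row = "".join("#" if is_building(x, 15.0) else "." for x in range(0, 120, 3))
print(row)
```

Unlike Perlin noise it produces hard rectangular edges and is stable under
re-evaluation, but whether something along these lines is cheap and credible
enough in practice is exactly the open question.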

> Sounds like a good plan to me.
> As for the Intel graphics argument, I'm with Gene.

Yeah, sure. Because neither of you hangs out so much in the forum trying to
provide support for users for whom FG doesn't run...

* Thorsten
_______________________________________________
Flightgear-devel mailing list
Flightgear-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/flightgear-devel
