> I've been playing with populating my home airport's area with buildings  
> derived from OSM floorplan data. I think having many buildings in the  
> correct place greatly improves realism over the current random  
> buildings/sparse static models, especially when you know the area.

This becomes a performance issue eventually - see Paris (France). Random 
buildings scale well for memory and performance because they're numerous 
instances of the same building, so only the positions need to be stored 
separately - a city full of unique buildings needs to store each building 
uniquely. The basic idea doesn't really scale.

> However, now the buildings obviously don't match with ground textures or  
> random trees. Any bright ideas how to achieve this? I know I could  
> follow the photoscenery approach and pre-render special materials and  
> masks for a couple of cities, but that just doesn't scale.

Matching with ground textures - very problematic, as you would need to generate 
unique ground textures on the fly, see below. Matching with random buildings 
and trees - I think the feasibility of using exclusion regions for random 
buildings and trees around static models has been discussed. This needs quite a 
lot of distance tests, but may just be possible.
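
To illustrate why the distance tests may just be affordable: a spatial hash 
keeps each query local, so a candidate random building or tree only has to be 
checked against the static models in its own neighbourhood. This is a sketch, 
not FlightGear code; the class name and the 30 m exclusion radius are my 
assumptions.

```python
import math
from collections import defaultdict

class ExclusionGrid:
    """Spatial hash for fast 'is this point near a static model?' queries.
    The cell size equals the exclusion radius, so only the 3x3 cell
    neighbourhood of a query point ever needs to be searched."""

    def __init__(self, radius):
        self.radius = radius
        self.cells = defaultdict(list)

    def _cell(self, x, y):
        return (int(x // self.radius), int(y // self.radius))

    def add_static_model(self, x, y):
        self.cells[self._cell(x, y)].append((x, y))

    def is_excluded(self, x, y):
        cx, cy = self._cell(x, y)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for (mx, my) in self.cells.get((cx + dx, cy + dy), ()):
                    if math.hypot(x - mx, y - my) < self.radius:
                        return True
        return False

grid = ExclusionGrid(radius=30.0)      # assumed 30 m exclusion zone
grid.add_static_model(100.0, 100.0)    # one OSM-derived static building
print(grid.is_excluded(110.0, 105.0))  # candidate tree too close -> True
print(grid.is_excluded(200.0, 200.0))  # far away -> False
```

With this layout the cost per candidate is constant rather than linear in the 
number of static models, which is what makes "quite a lot of distance tests" 
tolerable.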

> I could see a number of  
> advantages/disadvantages here as compared to our current, generic  
> textures:
> + much better autogen scenery possible: many textured streets/railroads  
> without additional scenery vertices

This isn't as impressive as you think - the kind of graphics card that can deal 
with 11,000 unique terrain texture sheets in the scene (you need something of 
that magnitude, see the numbers worked out here  
http://wiki.flightgear.org/Procedural_Texturing#Photo_texturing ) is also the 
kind of graphics card that can go through millions of vertices.

Custom Italy scenery has very high vertex-count roads and rivers - my GTX 670M 
GPU crunches these just fine up to the visibility range at which my combined 
11 GB of memory is filled.

If you think it through, it's much easier to fill memory with textures than 
with vertices - to fill 11 GB memory with unique terrain textures doesn't take 
all that much visibility and resolution.
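
The arithmetic is easy to sketch. The numbers below (1 m per texel, 
uncompressed RGBA, 32 bytes per vertex) are my illustrative assumptions, not 
measurements from FlightGear:

```python
# Back-of-the-envelope: how much unique terrain fits into 11 GB of memory?
bytes_per_texel = 4                  # uncompressed RGBA (assumed)
texels_per_m2 = 1.0                  # 1 m/texel ground resolution (assumed)
budget = 11 * 1024**3                # 11 GB

area_m2 = budget / (bytes_per_texel * texels_per_m2)
side_km = area_m2**0.5 / 1000.0
print(f"{side_km:.1f} km x {side_km:.1f} km")   # roughly a 54 km square

# The same budget spent on geometry instead:
bytes_per_vertex = 32                # position + normal + texcoord, roughly
print(f"{budget // bytes_per_vertex:,} vertices")   # ~370 million vertices
```

So a single 11 GB budget buys a unique-texture square whose side is shorter 
than a good visibility range, while the same budget holds hundreds of millions 
of vertices - which is the asymmetry the paragraph above describes.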

> + shared models with an individual piece of ground texture

Well, but how does FG know how this is supposed to look? 

Obviously, if you would do it manually, you would blend the individual ground 
texture against the surrounding. Which is bad, because it means you need 
non-local information to get the task done. You'd also want to generate 
different patterns in Europe, the US, Asia,...

I've been working a lot with procedurally generated patterns on the terrain - 
I've devised overlay texture schemes based on Perlin and sparse dot noise, and 
I'm working on 2nd-generation cloud-layer generating functions (you call a 
function and get in return the distribution of cloud sprites which corresponds 
to a mackerel sky, for instance). These are not trivial problems, but to 
procedurally generate a credible city/village/town/agriculture pattern, even if 
you know the location of some buildings, is a genuinely tough problem, and even 
if we can find a solution, it's probably rather performance-hungry.
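
For readers unfamiliar with this kind of overlay noise, here is a minimal 
stand-in: simple value noise with a smoothstep fade rather than true Perlin 
gradient noise, and with arbitrary hash constants and frequencies of my own 
choosing. Real terrain overlays are more elaborate, but the principle - a 
cheap deterministic function modulating the base texture - is the same.

```python
import math

def hash2(ix, iy):
    """Cheap deterministic pseudo-random value in [0, 1) per lattice point."""
    n = ix * 374761393 + iy * 668265263
    n = (n ^ (n >> 13)) * 1274126177
    return ((n ^ (n >> 16)) & 0xFFFFFFFF) / 2**32

def smooth(t):
    return t * t * (3 - 2 * t)       # smoothstep fade, as in classic Perlin

def value_noise(x, y):
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    sx, sy = smooth(fx), smooth(fy)
    # bilinear blend of the four surrounding lattice values
    top = hash2(ix, iy) * (1 - sx) + hash2(ix + 1, iy) * sx
    bot = hash2(ix, iy + 1) * (1 - sx) + hash2(ix + 1, iy + 1) * sx
    return top * (1 - sy) + bot * sy

def overlay(x, y, base=0.5):
    """Modulate a base terrain brightness by low-frequency noise."""
    return base * (0.75 + 0.5 * value_noise(x * 0.1, y * 0.1))
```

Because everything is a pure function of position, no unique texture ever has 
to be stored - which is exactly why this approach is memory-cheap, and why 
replacing it with stored unique sheets is such a step backwards for memory.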

Which brings you (as usual) between a rock and a hard place - you can trade 
between memory usage and performance consumption, but there's no free lunch.

> + get rid of sharp landclass borders

Essentially you're asking for photo-scenery, which would do that, except that 
the source isn't really an aerial photograph. So you get almost the same pros 
and cons as photo scenery if the scenery is pre-generated, and you get into 
significant performance and memory issues way beyond photo scenery if you want 
to do the generation at runtime.

> + possibly improved resolution

No, resolution will in fact drop sharply because of the memory limit. In the 
current scheme, using a finite set of terrain textures with procedural overlays, 
we can achieve cm-sized resolution on ground features. If you want to do this 
with unique texture sheets generated by the CPU at scenery load time, you had 
better bring petabytes of graphics memory.
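
To put a rough number on that claim: a back-of-the-envelope estimate for 
cm-scale unique textures over one visibility circle. The 30 km visibility and 
uncompressed RGBA are my assumptions.

```python
import math

radius_m = 30_000                # 30 km visibility range (assumed)
texels_per_m2 = 100 * 100        # 1 cm/texel -> 10,000 texels per square metre
bytes_per_texel = 4              # uncompressed RGBA (assumed)

total = math.pi * radius_m**2 * texels_per_m2 * bytes_per_texel
print(f"{total / 1e12:.0f} TB")  # over 100 TB, before mipmaps
```

Mipmaps add roughly another third on top, compression buys back an order of 
magnitude at best, and none of this includes vertex data - so the estimate 
lands in petabyte territory very quickly once visibility grows.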

> - eats much more video ram and CPU (but then I have 3 out of 4 idle  
> cores ATM)

Well, the memory really is the show-stopper. All else could in principle be 
pre-processed and shipped with the scenery. Or, if we had the memory to hold 
the raw data, the GPU could generate all relevant patterns (graphics cards are 
much better at this sort of thing than the CPU). But you can't implement any 
such scheme unless it's completely optional, if you have to deal with integrated 
Intel chipsets with 512 MB of memory.

> - probably totally incompatible with the current terragear toolchain

There's that as well.

In the end, if I could make a wishlist for how to design the scenery rendering, 
I would probably separate out hires sharp features like roads and describe them 
via vertices, pass the information on landclass distribution via a meta-texture 
with relatively coarse resolution, and build the actual textures procedurally 
everywhere based on the relative distribution densities of features coming from 
the meta-texture. Unique buildings would then need to be registered on the 
meta-texture in the scenery generation stage; the actual procedural terrain 
cover would be generated at runtime on the GPU.
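
As a sketch of that meta-texture idea - Python standing in for what would 
really be a GPU shader, with the cell size, landclass names, and density 
values all invented for illustration:

```python
# Coarse "meta-texture": one cell per ~100 m, each storing relative
# landclass densities (classes and numbers are made up for this sketch).
meta = {
    (0, 0): {"urban": 0.7, "forest": 0.1, "grass": 0.2},
    (0, 1): {"urban": 0.1, "forest": 0.6, "grass": 0.3},
}

def hash01(ix, iy):
    """Deterministic pseudo-random value in [0, 1) per fine-grained point."""
    n = (ix * 374761393 + iy * 668265263) & 0xFFFFFFFF
    n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
    return n / 2**32

def landclass_at(x, y, cell_size=100.0):
    """Pick a landclass for a fine-grained point from the coarse densities.
    The hash makes the choice stable, so the terrain doesn't flicker
    between frames even though nothing is stored per point."""
    cell = (int(x // cell_size), int(y // cell_size))
    densities = meta.get(cell, {"grass": 1.0})
    r = hash01(int(x * 10), int(y * 10))   # per-point pseudo-random draw
    acc = 0.0
    for cls, d in densities.items():
        acc += d
        if r <= acc:
            return cls
    return cls                             # float round-off fallback
```

The point of the design is that only the coarse density grid has to be stored 
or shipped; the fine-grained terrain cover is recomputed on the fly, which is 
the memory/performance trade described above.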

But we have to compromise - this wouldn't run on the Intels either.

* Thorsten

_______________________________________________
Flightgear-devel mailing list
Flightgear-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/flightgear-devel
