Hello,

> I do have a few questions though :
> Does the current code that you have handle texture paging?

Yes, textures and geometry are paged and decompressed asynchronously in the
background (separate thread). The engine supports image compression to save IO
(and possibly bus) bandwidth, e.g. JPEG and S3TC compression. The former may
be quite taxing on the CPU, so we usually only use JPEG for the finest detail
level textures, which account for most of the data, and S3TC for the lower
detail levels.
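
The paging scheme described above could be sketched roughly like this (a
hypothetical illustration, not FlightGear's actual code - the names and the
128x128 tile size are my assumptions): a worker thread pops tile requests off
a queue, decodes them in the background, and hands finished tiles back to the
render thread.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Tile { int id; std::vector<unsigned char> pixels; };

class TilePager {
public:
    TilePager() : done_(false), worker_([this] { run(); }) {}
    ~TilePager() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
    }
    // Called by the render thread when a tile comes into view.
    void request(int id) {
        { std::lock_guard<std::mutex> lk(m_); pending_.push(id); }
        cv_.notify_one();
    }
    // Render thread polls this once per frame to pick up finished tiles.
    std::vector<Tile> collectReady() {
        std::lock_guard<std::mutex> lk(m_);
        std::vector<Tile> out;
        out.swap(ready_);
        return out;
    }
private:
    void run() {  // background paging thread
        for (;;) {
            int id;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !pending_.empty(); });
                if (done_ && pending_.empty()) return;
                id = pending_.front();
                pending_.pop();
            }
            // Stand-in for the JPEG/S3TC decode of the tile's compressed data.
            Tile t{id, std::vector<unsigned char>(128 * 128, 0)};
            std::lock_guard<std::mutex> lk(m_);
            ready_.push_back(std::move(t));
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<int> pending_;
    std::vector<Tile> ready_;
    bool done_;
    std::thread worker_;
};
```

The point is that the render thread never blocks on IO or decompression; it
only polls collectReady() once per frame.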

> What sort of texture resolutions will it be able to scale down to?
> (meters/pixel)

The rendering is output sensitive, so only visible detail contributes to scene
complexity. However, updates (i.e. paging & decompressing) can be a bottleneck;
if you're moving fast, you could get into trouble trying to update all the
high-res textures. The easy solution is to limit texture and geometry detail
as a function of speed - i.e. don't display 1 m textures at Mach 5 (motion
blur!).
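
The speed-dependent detail limit could look something like this minimal
sketch (the threshold constant is invented for illustration, not taken from
the engine):

```cpp
// Hypothetical sketch: clamp the finest permitted texture resolution
// (meters/pixel) as a function of speed, so the pager can keep up.
// The 100 m/s threshold is an assumed constant.
double minTexelSize(double speedMps) {
    const double slowSpeed = 100.0;  // below this, full detail is allowed
    const double finest    = 1.0;    // finest available texel size, m/pixel
    if (speedMps <= slowSpeed)
        return finest;
    // permitted texel size grows linearly with speed beyond the threshold
    return finest * (speedMps / slowSpeed);
}
```

So at Mach 5 (roughly 1700 m/s) this sketch would only page in textures of
17 m/pixel and coarser.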

The real problem is that it's hard to get detailed textures for the whole
world (and storage hungry!!). What I'd like to experiment with later on is to
let a classifier run over the globally available 28.5 m Landsat textures, and
use the resulting classifications to generate missing detail at runtime. But
first things first...

> How is the mipmapping handled (if it currently uses mipmaps)?

Well, in a way, the texture LODs emulate aspects of mipmapping. The
ground texture is partitioned in a quadtree scheme, where each quadtree node
holds part of the texture at constant resolution (e.g. 128x128 pixels). The
root covers the whole texture domain, and children always cover their
respective quarter of the parent's domain. So, effectively, each parent is a
downsampled version of its children.
The LODs are chosen in a way which ensures that supersampling orthogonal
to the viewer is limited to a factor of 2 (the factor can be higher along
the viewing direction, however). Together with anisotropic filtering, this
gives very good results.
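
The level-selection rule can be sketched as follows (a simplified
illustration assuming a head-on view; the node sizes and camera parameters
are invented, and the function names are not the engine's):

```cpp
#include <cmath>

// Meters of ground covered per screen pixel at the given distance, for a
// camera with the given horizontal field of view and screen width.
double pixelFootprint(double distance, double fovRadians, int screenWidth) {
    return 2.0 * distance * std::tan(fovRadians / 2.0) / screenWidth;
}

// Pick the quadtree depth so that the node's texel size stays within a
// factor of 2 of the screen pixel footprint, i.e. supersampling is limited
// to 2x. Each node holds tileRes x tileRes texels, the root covers rootSize
// meters, and texel size halves with each level of descent.
int chooseLevel(double rootSize, int tileRes, double footprint, int maxLevel) {
    double targetTexel = footprint / 2.0;  // allow at most 2x supersampling
    double texel = rootSize / tileRes;     // texel size at the root level
    int level = 0;
    // descend while the next level's texels are still no finer than target
    while (level < maxLevel && texel / 2.0 >= targetTexel) {
        texel /= 2.0;
        ++level;
    }
    return level;
}
```

In other words: descend the quadtree until the next level would supersample
by more than a factor of 2 relative to the pixel footprint, then stop.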

> What will the maximum visual range be?

That also depends on the available detail, resolution, and permitted screen
space error - hard to tell, but I think it's nothing to worry about. For
example, I get good performance (1024x768, 1 GHz Duron, GeForce3, Mach 2)
without limiting visibility for a whole UTM-zone dataset (with 28.5 m
textures, normal maps and SRTM3 elevation); that should be a few hundred
kilometers of visual range.
As stated earlier, the nearer (fast-moving) detail is more problematic than
the distant scenery because of the frequent updates; for the same reason,
hard turns are evil :-)

Hope that answers your questions,

 Manuel

_______________________________________________
Flightgear-devel mailing list
Flightgear-devel@flightgear.org
http://mail.flightgear.org/mailman/listinfo/flightgear-devel