Re: [Flightgear-devel] config.sub ?

2005-07-30 Thread Manuel Massing
Hi,

 I don't see a config.sub. Where does that come from?

Have you tried automake -a? For newer autoconf versions,
you can also use autoreconf -i to bootstrap the autoconf system.

Manuel



Re: [Flightgear-devel] asymetric frustum

2005-07-08 Thread Manuel Massing
Hello David,

 I saw few months ago some posts about asymetric frustum for a screen wall.
 I got
 a similar installation so I will have three displays, each of them with
 asymetric frustum (the point of view is not centered on the screens). I
 will have these parameters :

 LeftScreen_Left_FOV
[...]
 RightScreen_Bottom_FOV

Well, as Curt stated already, that depends on how these displays should
be aligned. 

Assuming you want to simply combine three coplanar, untilted views (e.g. for a
multi-projector configuration on a single projection screen), and if these
parameters are angles relative to the viewing direction, it should
work as follows:

- create a symmetric frustum (taking left/bottom angles as negative, right/top
as positive) with a full FOV of
hor_fov  = 2*max(-LeftScreen_Left_FOV, RightScreen_Right_FOV)
vert_fov = 2*max(CenterScreen_Top_FOV, -CenterScreen_Bottom_FOV)

- calculate the viewport coefficients [in range 0..1] by
LS_l_coeff = 0.5 + 0.5*tan(LS_Left_FOV)/tan(hor_fov/2)
LS_r_coeff = 0.5 + 0.5*tan(LS_Right_FOV)/tan(hor_fov/2)

LS_t_coeff = 0.5 + 0.5*tan(LS_Top_FOV)/tan(vert_fov/2)
LS_b_coeff = 0.5 + 0.5*tan(LS_Bottom_FOV)/tan(vert_fov/2)

etc.

These can then be set via the property system, I believe. Curt will surely
be able to enlighten you on this.
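
To make the mapping concrete, here is a minimal C++ sketch of the calculation
above. It is only an illustration, not FlightGear code: the screen/FOV names
are placeholders, and it assumes the sign convention used above (left/bottom
angles negative, right/top positive, all angles in radians).

#include <cmath>

// Viewport coefficients (in [0..1]) of one display inside a shared symmetric
// frustum with full FOVs hor_fov and vert_fov.
struct ViewportCoeffs { float l, r, b, t; };

ViewportCoeffs viewportForScreen(float left_fov, float right_fov,
                                 float bottom_fov, float top_fov,
                                 float hor_fov, float vert_fov)
{
    // e.g. hor_fov = 2*max(-LeftScreen_Left_FOV, RightScreen_Right_FOV)
    const float half_h = std::tan(hor_fov  * 0.5f);
    const float half_v = std::tan(vert_fov * 0.5f);

    ViewportCoeffs c;
    c.l = 0.5f + 0.5f * std::tan(left_fov)   / half_h;
    c.r = 0.5f + 0.5f * std::tan(right_fov)  / half_h;
    c.b = 0.5f + 0.5f * std::tan(bottom_fov) / half_v;
    c.t = 0.5f + 0.5f * std::tan(top_fov)    / half_v;
    return c;
}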

hth,

Manuel



Re: [Flightgear-devel] opengl texgen - projected textures

2005-06-05 Thread Manuel Massing
Hello Harald,

 I am trying to project a texture on the scenario background. A priori
 everything is setup correctly and the code should
 work but at the end I only get a projection on the screen space.
 picture here :

hmm, I don't have the time to delve deeper into your example, but you seem to
be using the same projection matrix for the light and the camera, so the
outcome is expected...

keep in mind that OpenGL multiplies the texgen (eye) planes by the inverse
of the current modelview matrix at the time they are specified. So you have to
make sure the modelview holds the camera view at that point. If it holds the
light's point of view instead, the texgen amounts to an identity transform plus
the projection. And if the light projection matrix matches the camera's,
fragment and texture coordinates will be identical (modulo the bias), hence the
screen-space alignment of your projection.

Also, a possible caveat: depending on the storage layout of SGMatrix, you
might need to feed OpenGL the transposed texgen matrix as plane vectors (and
use the transposed bias matrix I have seen lying around :-)).
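
For illustration, a hedged sketch of the order of operations described above,
using plain OpenGL 1.x texgen. The matrix names are placeholders (PLIB's sgMat4
is assumed), and whether rows or columns are the correct plane vectors is
exactly the transpose caveat mentioned above.

#include <GL/gl.h>
#include <plib/sg.h>

// camera_view:  world -> camera eye matrix (the current camera view)
// light_matrix: bias * light_projection * light_view (world -> light texture)
void setupProjectiveTexgen(const sgMat4 camera_view, const sgMat4 light_matrix)
{
    // 1. The modelview must hold the *camera* view when the eye planes are
    //    specified -- OpenGL multiplies them by the inverse modelview.
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf((const GLfloat *) camera_view);

    // 2. Feed the combined light matrix as eye planes, one plane per texture
    //    coordinate. Depending on the storage layout this may have to be the
    //    transposed matrix (see above).
    static const GLenum coords[4] = { GL_S, GL_T, GL_R, GL_Q };
    for (int i = 0; i < 4; ++i) {
        const GLfloat plane[4] = { light_matrix[0][i], light_matrix[1][i],
                                   light_matrix[2][i], light_matrix[3][i] };
        glTexGeni (coords[i], GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
        glTexGenfv(coords[i], GL_EYE_PLANE, plane);
        glEnable((GLenum)(GL_TEXTURE_GEN_S + i));
    }
}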

bye,

Manuel



Re: [Flightgear-devel] new 3d clouds - strange movement

2005-05-07 Thread Manuel Massing
Hi,

 I think that you have that effect if you fly to the border of a cloud.
 The quads are rotated to face the camera and when the quads are very
 near on the left or the right the rotation is too big and the quad go
 out of sight. This will be corrected.

so are you using billboards rather than impostors? Is a single cloud
represented by multiple billboard primitives or by a single quad? 

In the latter case, you could try to use impostors with a fixed world-space
orientation, which are invalidated (updated) above a certain viewing angle
threshold and distance threshold (which depend on the extent of the cloud
bounding box and the impostor texture resolution, respectively).
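
For what it's worth, a hedged sketch of such an invalidation test (the
thresholds, parameter names and function name are made up for illustration;
PLIB vector helpers are assumed to be available):

#include <cmath>
#include <algorithm>
#include <plib/sg.h>

// Decide whether a cloud impostor needs to be re-rendered. angle_threshold_rad
// would be derived from the cloud bounding box extent, distance_ratio_threshold
// from the impostor texture resolution.
bool impostorNeedsUpdate(const sgVec3 eye_at_capture, const sgVec3 eye_now,
                         const sgVec3 cloud_center,
                         float angle_threshold_rad, float distance_ratio_threshold)
{
    sgVec3 v_old, v_new;
    sgSubVec3(v_old, eye_at_capture, cloud_center);
    sgSubVec3(v_new, eye_now,        cloud_center);

    const float d_old = sgLengthVec3(v_old);
    const float d_new = sgLengthVec3(v_new);

    // Viewing-angle change since the impostor texture was captured.
    const float cos_angle = sgScalarProductVec3(v_old, v_new) / (d_old * d_new);
    const bool angle_exceeded =
        std::acos(std::min(1.0f, std::max(-1.0f, cos_angle))) > angle_threshold_rad;

    // Relative distance change (controls the texel-to-pixel ratio).
    const bool distance_exceeded =
        std::fabs(d_new - d_old) / d_old > distance_ratio_threshold;

    return angle_exceeded || distance_exceeded;
}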

 Manuel



Re: [Flightgear-devel] RFC: Eliminating jitter

2005-05-04 Thread Manuel Massing
Hello Erik,

sorry for the late reply.

 I'm trying to get this included in CVS but it is a bit out of sync.

yep, I'm really getting a bad conscience about posting an unsync'ed patch :-)
I'm a bit busy at the moment, but I'll try to find some time to get the
flightgear modifications sorted out (if you haven't done so already)...

Unfortunately, I did the necessary modifications in the same breath as the
scenery API restructuring, so I don't have anything which would apply to
current CVS. Anyway, I think I had that API nearly finished, maybe I'll be
able to get that into a 'commitable' state instead.

   Calling set_tile_center is not needed anymore, right?
   How do I handle situations where get_tile_center is called?

I think you can safely use globals->get_scenery()->get_center().
But IIRC, get_tile_center() was mostly used anyway as the (unnecessary)
parameter for the get_absolute_view_pos() call ...

Generally, relative coordinates should only be used locally, both in time
(because moving reference frames invalidate the old coordinates) and scope
(don't pass relative coordinates among objects or unrelated functions, which
might use different reference frames). If coordinates are passed as
SGLocation (i.e. absolute), there should be no problem, and I think that's
warranted with the current FG implementation.

The tile center can be managed outside the SGLocation class, e.g. by
getting it from the scenery instance or whatever reference frame is convenient
for local calculations. That's why I thought it would fit in with Mathias'
patch.
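
As a hedged illustration of that "compute relative coordinates locally, on
demand" pattern (the include paths and names are assumptions based on
SimGear 0.3, not part of the patch):

#include <plib/sg.h>
#include <simgear/math/point3d.hxx>

// Derive single-precision relative coordinates from an absolute position and
// whatever reference center is convenient locally, e.g.
// globals->get_scenery()->get_center(). The result is only valid as long as
// the center does not move, so recompute it where it is used.
static void absoluteToRelative(const sgdVec3 absolute_pos,
                               const Point3D& scenery_center,
                               sgVec3 relative_pos)
{
    sgSetVec3(relative_pos,
              (float)(absolute_pos[0] - scenery_center.x()),
              (float)(absolute_pos[1] - scenery_center.y()),
              (float)(absolute_pos[2] - scenery_center.z()));
}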

bye,
 Manuel



Re: [Flightgear-devel] RFC: Eliminating jitter

2005-04-28 Thread Manuel Massing
Hi Mathias,

this reminds me that I had implemented the attached cleanup-patch for 
SGLocation (as part of the abstract terrain API).  I currently don't have the
time to finish the API, so the patch is a bit out of context (requires some
small changes in flightgear), but if you are currently working on the
transform system you might find parts of it useful...

It does take care of a few oddities in SGLocation [e.g. getAbsolutePosition()
required the scenery_center as a parameter (but does not depend on it), while
get_relative_pos() took no argument, etc.]. Dunno if these oddities really
manifest as bugs, but IMO they made the placement/transform system quite hard
to understand and use.

cheers,

 Manuel
? SGLocation.diff
Index: location.cxx
===
RCS file: /var/cvs/SimGear-0.3/SimGear/simgear/scene/model/location.cxx,v
retrieving revision 1.6
diff -C2 -r1.6 location.cxx
*** location.cxx	19 Nov 2004 21:44:17 -	1.6
--- location.cxx	28 Apr 2005 15:42:43 -
***
*** 94,98 
  dst[0][3] = SG_ZERO ;
  dst[3][3] = SG_ONE ;
- 
  }
  
--- 94,97 
***
*** 104,108 
  // Constructor
  SGLocation::SGLocation( void ):
! _dirty(true),
  _lon_deg(-1000),
  _lat_deg(0),
--- 103,107 
  // Constructor
  SGLocation::SGLocation( void ):
! _position_dirty(true), _orientation_dirty(true),
  _lon_deg(-1000),
  _lat_deg(0),
***
*** 111,120 
  _pitch_deg(0),
  _heading_deg(0),
! _cur_elev_m(0),
! _tile_center(0)
  {
  sgdZeroVec3(_absolute_view_pos);
- sgZeroVec3(_relative_view_pos);
- sgZeroVec3(_zero_elev_view_pos);
  sgMakeRotMat4( UP, 0.0, 0.0, 0.0 );
  sgMakeRotMat4( TRANS, 0.0, 0.0, 0.0 );
--- 110,116 
  _pitch_deg(0),
  _heading_deg(0),
! _cur_elev_m(0)
  {
  sgdZeroVec3(_absolute_view_pos);
  sgMakeRotMat4( UP, 0.0, 0.0, 0.0 );
  sgMakeRotMat4( TRANS, 0.0, 0.0, 0.0 );
***
*** 127,148 
  
  void
- SGLocation::init ()
- {
- }
- 
- void
- SGLocation::bind ()
- {
- }
- 
- void
- SGLocation::unbind ()
- {
- }
- 
- void
  SGLocation::setPosition (double lon_deg, double lat_deg, double alt_ft)
  {
!   _dirty = true;
_lon_deg = lon_deg;
_lat_deg = lat_deg;
--- 123,129 
  
  void
  SGLocation::setPosition (double lon_deg, double lat_deg, double alt_ft)
  {
!   _position_dirty = true;
_lon_deg = lon_deg;
_lat_deg = lat_deg;
***
*** 153,157 
  SGLocation::setOrientation (double roll_deg, double pitch_deg, double heading_deg)
  {
!   _dirty = true;
_roll_deg = roll_deg;
_pitch_deg = pitch_deg;
--- 134,138 
  SGLocation::setOrientation (double roll_deg, double pitch_deg, double heading_deg)
  {
!   _orientation_dirty = true;
_roll_deg = roll_deg;
_pitch_deg = pitch_deg;
***
*** 160,256 
  
  double *
! SGLocation::get_absolute_view_pos( const Point3D scenery_center ) 
  {
! if ( _dirty ) {
! recalc( scenery_center );
! }
  return _absolute_view_pos;
  }
  
  float *
! SGLocation::getRelativeViewPos( const Point3D scenery_center ) 
  {
! if ( _dirty ) {
! recalc( scenery_center );
! }
  return _relative_view_pos;
  }
  
! float *
! SGLocation::getZeroElevViewPos( const Point3D scenery_center ) 
! {
! if ( _dirty ) {
! recalc( scenery_center );
! }
! return _zero_elev_view_pos;
! }
! 
! 
! // recalc() is done every time one of the setters is called (making the 
! // cached data dirty) on the next get.  It calculates all the outputs 
! // for viewer.
! void
! SGLocation::recalc( const Point3D scenery_center )
  {
! 
!   recalcPosition( _lon_deg, _lat_deg, _alt_ft, scenery_center );
! 
!   // Make the world up rotation matrix for eye positioin...
!   sgMakeRotMat4( UP, _lon_deg, 0.0, -_lat_deg );
! 
! 
!   // get the world up radial vector from planet center for output
!   sgSetVec3( _world_up, UP[0][0], UP[0][1], UP[0][2] );
! 
!   // Creat local matrix with current geodetic position.  Converting
!   // the orientation (pitch/roll/heading) to vectors.
!   MakeTRANS( TRANS, _pitch_deg * SG_DEGREES_TO_RADIANS,
_roll_deg * SG_DEGREES_TO_RADIANS,
!   -_heading_deg * SG_DEGREES_TO_RADIANS,
!   UP);
  
!   // Given a vector pointing straight down (-Z), map into onto the
!   // local plane representing horizontal.  This should give us the
!   // local direction for moving south.
!   sgVec3 minus_z;
!   sgSetVec3( minus_z, 0.0, 0.0, -1.0 );
! 
!   sgmap_vec_onto_cur_surface_plane(_world_up, _relative_view_pos, minus_z,
!  _surface_south);
!   sgNormalizeVec3(_surface_south);
! 
!   // now calculate the surface east vector
!   sgVec3 world_down;
!   sgNegateVec3(world_down, _world_up);
!   sgVectorProductVec3(_surface_east, _surface_south, world_down);
! 
!   set_clean();
! }
! 
! void
! SGLocation::recalcPosition( 

Re: [Flightgear-devel] Camera/FOV/View Frustum question.

2005-03-04 Thread Manuel Massing
Hi,

 If you want to project an image from a single projector onto a curved
 wrap-around screen, could you just use a normal projector, and add a
 fish-eye lens of some sort?  Something like a glass cylinder cut in half
 vertically, with the flat part facing the projector, and the curved part
 towards the curved screen?  I'd imagine a fairly simple view frustum
 could compensate for the horizontal stretching effect of the lens.

A more general solution (for non-linear distortions, e.g. a planar
projection onto a curved screen) would be to render to a texture, and map this
texture onto a tessellated imaging plane with appropriate texture coordinates
(to yield a piecewise-linear approximation of the inverse distortion). This
would allow for fairly flexible projection setups, and shouldn't add noticeable
(if any) overhead on hardware with render-to-texture support (e.g. OpenGL
framebuffer objects).
The only tradeoff is uneven sampling of the projection area (because the area
covered by a pixel on the projection screen is a function of the angle of
incidence), which shouldn't be a big problem if the screen is not too strongly
curved.
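
A hedged sketch of that remapping pass in fixed-function OpenGL (all names are
made up for illustration): the scene is assumed to be available as a texture,
and warp_s/warp_t hold precomputed texture coordinates on an N x M grid that
encode the inverse distortion.

#include <GL/gl.h>

// Assumes an orthographic/identity projection mapping [-1,1]^2 to the window,
// and N, M >= 2. 'scene_tex' holds the rendered scene (e.g. filled via
// glCopyTexSubImage2D or a render-to-texture extension).
void drawDistortionGrid(unsigned scene_tex, int N, int M,
                        const float *warp_s, const float *warp_t)
{
    glBindTexture(GL_TEXTURE_2D, scene_tex);
    glEnable(GL_TEXTURE_2D);

    // Draw a screen-aligned, tessellated quad; each vertex samples the scene
    // texture at its pre-distorted coordinate, giving a piecewise-linear
    // approximation of the inverse distortion.
    for (int j = 0; j + 1 < M; ++j) {
        glBegin(GL_TRIANGLE_STRIP);
        for (int i = 0; i < N; ++i) {
            for (int dj = 0; dj < 2; ++dj) {
                const int idx = (j + dj) * N + i;
                glTexCoord2f(warp_s[idx], warp_t[idx]);
                glVertex2f(-1.0f + 2.0f * i / (N - 1),
                           -1.0f + 2.0f * (j + dj) / (M - 1));
            }
        }
        glEnd();
    }
    glDisable(GL_TEXTURE_2D);
}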

bye,

 Manuel



Re: [Flightgear-devel] Camera/FOV/View Frustum question.

2005-02-25 Thread Manuel Massing
Hello Curtis,

 First let me explain what I need to do.  I need to configure an
 asymmetric view frustum.  I need to place 3 monitors next to each
 other, aligned along a flat plane.  The view drawn in each monitor needs
 to be projected on that same flat plane.  I cannot just set a view
 offset for the side channels because the view won't come out right.  If
 I simply rotate the view offset and take a symmetric view frustum, the
 plane of projection will be perpendicular to the viewer in each
 channel.  That won't work for this particular application and it's not
 what I need.  I need to configure an actual asymmetric view frustum for
 each side channel.  If someone thinks they can help me, but is confused
 by my description, I'm happy to explain this further.  Think about this

I hope the attached camera class might help you - I implemented support for
off-center and tiled projections for a similar project (powerwall
rendering).

The parts that probably interest you are in update_frustum() and the
parameters set via the setTiling(...) method.

hope this helps,

 Manuel
/***
  Camera.cpp  -  description
 ---
begin: Thu Apr 18 2002
written by   : Manuel Massing
email: [EMAIL PROTECTED]
 ***/

/***
 * *
 *   This program is free software; you can redistribute it and/or modify  *
 *   it under the terms of the GNU General Public License as published by  *
 *   the Free Software Foundation; either version 2 of the License, or *
 *   (at your option) any later version.   *
 * *
 ***/

#include "Camera.h"

Camera::Camera()
{
	n = 1.0f;
	f = 4000.f;
	fov = 45.f;
	eye_separation = 0.f;
	focal_length = 2000.f;

	resx = 1; resy = 1;
	llx = lly = 0;
	aspect = 4.f/3.f;
	tile_t = tile_r = 1.f;
	tile_b = tile_l = 0.f;
	frustum_dirty = orientation_dirty = obj2clip_dirty = true;
}

void Camera::update_frustum() const
{
	if (frustum_dirty) {
		// Calculate the symmetric frustum extent at focal distance.
		float frustum_top = focal_length*tanf(fov*M_PI/360.f);
		float frustum_left = -frustum_top*aspect*((tile_t-tile_b)/(tile_r-tile_l));

		// Adapt frustum to desired viewport region.
		t = (2.f*tile_t - 1.f)*frustum_top;
		b = (2.f*tile_b - 1.f)*frustum_top;
		l = (1.f - 2.f*tile_l)*(frustum_left - eye_separation);
		r = (1.f - 2.f*tile_r)*(frustum_left - eye_separation);

		float rescale = n/focal_length;
		t*= rescale;
		b*= rescale;
		l*= rescale;
		r*= rescale;
	
		local_clipplane[cpNear] = Plane(Vector3d(0.f, 0.f, -1.f), n); // near
		local_clipplane[cpFar] = Plane(Vector3d(0.f, 0.f, 1.f), -f);  // far

		Vector3d cpn(n, 0, l);
		cpn*=1.f/cpn.length();
		local_clipplane[cpLeft] = Plane(cpn, 0);  // left

		cpn = !Vector3d(-n, 0, -r);
		local_clipplane[cpRight] = Plane(cpn, 0); // right

		cpn = !Vector3d(0, -n, -t);
		local_clipplane[cpTop] = Plane(cpn, 0);   // top

		cpn = !Vector3d(0, n, b);
		local_clipplane[cpBottom] = Plane(cpn, 0);// bottom

		frustum = Matrix4x4( 2.f*n/(r - l),  0.f, (r + l)/(r - l),   0.f,
		 0.f,2.f*n/(t - b),   (t + b)/(t - b),   0.f,
		 0.f,0.f,-(f + n)/(f - n),  -2.f*f*n/(f - n),
		 0.f,0.f,-1.f,   0.f);


		#ifdef LOGGING
		if (hasLogfile())
			get_logfile().setItem(fov, fov);
		#endif

		frustum_dirty = false;
		obj2clip_dirty = true;
	}
}

void Camera::update_modelview() const
{
	if (orientation_dirty) {
		if (vp == vpLookAt) {
			Vector3d xAxis, zAxis, tmp_up = up;

			zAxis = !(lookat - pos);
			xAxis = !cross(zAxis, tmp_up);
			tmp_up = !cross(xAxis, zAxis);

			Matrix3x3 rotation = Matrix3x3(xAxis, tmp_up, -zAxis).transpose();
			modelview = Matrix4x4(rotation, rotation*(-pos - eye_separation*xAxis));
		}
		else if (vp == vpTargetRoll) {
			Vector3d xAxis, zAxis, tmp_up = up;
			zAxis = !(lookat - pos);
			// Make sure we are not looking down the y-axis
			if (!zAxis.x && !zAxis.z)
tmp_up = Vector3d(-zAxis.y, 0.f, 0.f);
			else
tmp_up = Vector3d(0.f, 1.f, 0.f);

			xAxis = !cross(zAxis, tmp_up);
			tmp_up = !cross(xAxis, zAxis);

			Matrix3x3 rotation = Matrix3x3(xAxis, tmp_up, -zAxis).transpose();
			Matrix3x3 camroll;
			camroll.setRotateZ(roll);
			rotation = camroll * rotation;
			modelview = Matrix4x4(rotation, rotation*(-pos - eye_separation*xAxis));
		}
		else

Re: [Flightgear-devel] Camera/FOV/View Frustum question.

2005-02-25 Thread Manuel Massing
Hello again,

sorry, I didn't read your whole mail, so my response
was probably not relevant to your problem.

 Going back to my original query.  My little experiment to pan the view
 frustum side to side using this technique worked great in all the
 external views, but totally screwed up the outside world in view #0 ...
 I ended up with extreme overzoom and no panning relative to the outside
 scenery, but the 3d cockpit worked ok and panned as I was hoping it would.

This is just a wild guess, but it might have to do with the near-far 
clipping planes being changed for the scenery rendering in renderer.cxx. 
This results in an update of the frustum parameters, so maybe you will need to
reapply your viewport modifications at different stages?

Manuel



[Flightgear-devel] property system - design guidelines

2005-02-19 Thread Manuel Massing
Hi,

just a quick question: are there any guidelines regarding the scope at which
property system values should be accessed? 

E.g., when should classes communicate values via the property system (if
at all)? At which level in the program structure / by which
entities should the property system be used? 

thanks,

 Manuel



Re: [Flightgear-devel] Runway lighting - What happened to the new terrain engine?

2005-01-30 Thread Manuel Massing
Hi,

 I'm not sure whether emmisive, specular and diffuse lighting might give
 a different result here.

Hmm, I don't think things are that dramatic... Admittedly, the following
thoughts apply only to local (per-texel) image differences, but
the big picture shouldn't be worse off.

The specular term of a local reflection model normally does not depend
on the object's surface color; it merely emulates a highlight. Diffuse lighting
could amplify a visual error locally if the local contribution of all light
sources is larger than 1 (I assume emissiveness is just a constant light
contribution term, similar to ambient). For a given wavelength:

diffuse_exact = min(1.0, lossless_surface_color*light_contributions)
diffuse_lossy = min(1.0, lossy_surface_color*light_contributions)

As long as neither term saturates, it follows that

(diffuse_exact - diffuse_lossy) = (lossless_color - lossy_color)*light_contributions

I.e., the difference image is simply scaled by the light source contributions.
So only if an area is strongly lit (contribution > 1) will artefacts become
more noticeable than in an unmodulated view of the image.

Also keep in mind that most image differences will probably be in the same
ballpark as the discretisation error; so the same problem presents itself with
lossless textures, but probably in a less structured and therefore less
noticeable manner. You just can't get dynamic range where there wasn't any to
begin with; the same goes for spatial resolution.

Don't take my musings as gospel, though :-)

bye,

 Manuel



Re: [Flightgear-devel] Runway lighting - What happened to the new terrain engine?

2005-01-29 Thread Manuel Massing
Hello,

 Norman just pointed JPEG 2000 out to me which is open source (and royalty
 free for GPL projects) and far better than the standard JPEG most of us
 use. It uses state-of-the-art wavelet compression and some of the
 comparisons I've seen are incredible. It supports both lossless and lossy
 compression.

 Some comparisons :
 http://www.leadtools.com/SDK/Raster/Raster-Addon-JPEG2000-Example.htm
 http://www.geocities.com/ee00224/btp2.html

 It could be worthwhile looking into if we need to store large images.
 The SDK with source code is available at http://www.ermapper.com

The terrain engine also supports the JasPer JPEG 2000 library. Unfortunately,
the last time I tested it, JPEG 2000 decoding performed badly (in terms of
runtime) compared to optimized JPEG decoding routines.

cheers,

 Manuel



Re: [Flightgear-devel] Runway lighting - What happened to the new terrain engine?

2005-01-29 Thread Manuel Massing
Hi,

 For normal photographs that's great - for textures that get scaled,
 projected, sheared (sp?), lit, ... the uses assumptions dodn't hold
 anymore.

Why should projection, shearing, or scaling be a problem? If that were
generally true, wouldn't every JPEG image displayed on your computer screen
look lousy when viewed from an angle?

The problem here is resampling (as indeed happens with texture mapping). If
you sample your textures incorrectly, things might look worse than expected,
but nobody says it isn't possible to do things correctly. 

This is why I don't like the article you referenced; it does have some valid
points, but they are presented in a rather sweeping manner.

Lighting can be more of a problem, but areas are more often dimmed than
fully lit, so I'd assume the difference (i.e. the error) between the lossless
and the lossy texture is also dimmed.

The one thing that really looks crappy is JPEG-compressed normal maps.

 An extreme example: when you use a very high compression rate you'll see
 the blocking artefacts. So you use a not so high compression and are
 hapy with the result. If you zoom into the picture you'll start to see
 the blocking again as the pixels got large enough.

If you zoom in even further, you will single out individual pixels. 

 So JPEG isn't usefull.

Well, I think it is. It surely isn't optimal, but things don't look nearly
as bad as one would assume after having read "JPEGs are evil, too" :-)

 One solution that might work could be wavelets. (This is where JEPG2000
 gets interesting again). But the wavelets used would need to be choosen
 carefully.

Maybe the gwic library would be worth a look...

bye,

 Manuel



Re: [Flightgear-devel] Runway lighting - What happened to the new terrain engine?

2005-01-29 Thread Manuel Massing
Hello Oliver,

 There is a trick to create textures with a 15 m resolution based on landsat
 data:
 http://www.terrainmap.com/rm29.html

yes, fusing the panchromatic channel is a nice option.
Ideally, one should devise an algorithm which can do the fusing at runtime
(e.g. in the pixel shaders), so that the color information for the
panchromatic channel doesn't need to be stored redundantly.
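
The fusing step itself is simple enough to sketch. The following CPU version is
only an illustration of the idea (in the engine it would run per fragment in a
pixel shader); the names and the luminance weights are my own, not engine code.

#include <algorithm>

struct RGB { float r, g, b; };

// Pan-sharpening: modulate a low-resolution color sample by the ratio of the
// high-resolution panchromatic intensity to the color sample's own luminance.
static RGB fusePanchromatic(const RGB &lowres_color, float pan_intensity)
{
    // Approximate luminance of the low-resolution color sample.
    const float lum = 0.299f * lowres_color.r
                    + 0.587f * lowres_color.g
                    + 0.114f * lowres_color.b;
    const float scale = (lum > 0.0f) ? pan_intensity / lum : 0.0f;

    RGB out;
    out.r = std::min(1.0f, lowres_color.r * scale);
    out.g = std::min(1.0f, lowres_color.g * scale);
    out.b = std::min(1.0f, lowres_color.b * scale);
    return out;
}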

 BTW:
 Is it possible to use this classifier to create a new vector map with a
 larger landcover variety than Vmap0?

well, the classifier you are referring to does not (yet) exist :-)
But I know a graduate student here at the university who is working on such a
thing... I'll have to ask him how robust and usable it is (and whether we
could use it). I know it's supervised, so theoretically you should be able to
train it on all the classes you want, but that doesn't mean the classification
will be robust.

bye,

 Manuel



Re: [Flightgear-devel] Runway lighting - What happened to the new terrain engine?

2005-01-28 Thread Manuel Massing
Hello,

 As with everything, really, the key here is integration.  Make it work
 with FlightGear so we can test.  Saying here is code, can we use it?
 just isn't enough.  It needs to be here is a patch, try it and tell
 me what breaks.  Until we get that far, there really isn't much to
 argue about.

I completely agree with you on the integration part. I think the engine
is technically adequate for its intended purposes (i.e. satellite-textured
landscapes). If you have any questions concerning the technical side, feel
free to ask. In this light, it's also important to see it as an alternative,
not a replacement, for the current scenery, because each engine will have its
own set of advantages and disadvantages.

By using an abstract API, a terrain engine could be chosen at runtime.
But it will definitely take some work to abstract out the terrain engine.
The good thing is, such an abstract API would make the scenery subsystem
more modular and easier to use than in its current, tightly coupled form.

I have attached what I could imagine as a usable terrain API (modulo
conflicts with reality :-)).

regards,

 Manuel
/**
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License as
 * published by the Free Software Foundation; either version 2 of the
 * License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
 * \short Abstract class to define the API for the terrain rendering subsystem.
 *
 * written by Manuel Massing, (c) 2004 by Manuel Massing
*/
 
#ifndef TERRAIN_H
#define TERRAIN_H

#include <FDM/flight.hxx>
#include <string>
 
using namespace std;

class GeocCoord;

/**
 * \short Abstract class which defines an API for the terrain rendering subsystem.
 *
 * Offers methods for:
 * - rendering & detail management
 * - Airport management
 * - collision & elevation queries
 */
class Terrain {
public:
	enum RenderEntity {
		trTerrain  = 1,// The basic landscape
		trRunways  = 2,// Runway structures
		trRunwayLighting   = 4,// Runway lighting
		trStaticGroundObjects  = 8,// Landmarks, buildings, trees which were placed at scenery generation time
		trDynamicGroundObjects = 16// Procedurally generated ground objects, e.g. trees, buildings
	};

	
	
	/**
	 * Prepare rendering for the given viewing parameters and the 
	 * configured detail levels.
	 */
	virtual bool update(FGViewer *viewer) = 0;

	// Interface it with scene graph or via a render() method?
	// Returning a scene-graph node is probably the better solution,
	
	/**
	 * Render the terrain using the viewer position and the given rendering flags.
	 * \note that rendering entities which were disabled during the update() call (i.e. entities with a detail setting of zero) will not be rendered.
	 */
	virtual void render(RenderEntity renderFlags) = 0;

	/**
	 * Return a scene-graph node which
	 */
	//SGNode *getSceneNode(RenderEntity flags);
	
	/**
	 * Set the detail level for the indicated rendering entity.
	 *
	 * Valid range is 0 (disable rendering) to the value returned by getDetailLevels(enum Renderflags),
	 * which corresponds to maximum detail.
	 *
	 * \param RenderEntity
	 * \param detailLevel The desired detail level, in the range 0 to getDetailLevels(entity).
	 */
	virtual void setDetailLevel(RenderEntity entity, const unsigned int detailLevel) = 0;
	
	/**
	 * A clear text (human-readable) explanation of detail level modalities.
	 * e.g. getDetailLevelFeatures(trTerrain, 1) could return
	 *  Render terrain within 32 pixels accuracy.\n
	 *  Disable texture mapping.\n
	 *  Disable shading.\n
	 * 
	 * This is needed to offer an abstracted but descriptive representation for the user interface.
	 */
	virtual string getDetailLevelFeatures(RenderEntity entity, int detail_level) const = 0;

	/**
	 * Indicates whether airport definitions can be dynamically added at runtime.
	 * Otherwise, the terrain implementation only supports pre-compiled airports 
	 * (i.e. airports included at terrain-build time).
	 */
	virtual bool supportsDynamicAirports() const = 0;
	
	/**
	 * Add specified airport.
	 * 
	 * Fails if airport already exists or dynamic airports are not supported.
	 * 
	 * \param ID Zero-terminated string of the airport identifier
	 * \param airport An instance holding all the relevant information about the structure of the airport to be added.
	 *
	 * \returns true on success, otherwise false.
	 */
	//virtual bool addAirport(char *ID, const Airport airport) = 0;
	virtual bool

Re: [Flightgear-devel] Runway lighting - What happened to the new terrain engine?

2005-01-28 Thread Manuel Massing
Hello,

 I do have a few questions though :
 Does the current code that you have handle texture paging?

Yes, textures and geometry are paged and decompressed asynchronously in the
background (separate thread). The engine supports image compression to save IO
(and possibly bus) bandwidth, e.g. JPEG and S3TC compression. The first may be
quite taxing on the CPU, so we usually only use JPEG for the finest detail
level textures, which account for most of the data, and S3TC for the lower
detail levels.

 What sort of texture resolutions will it be able to scale down to?
 (meters/pixel)

The rendering is output-sensitive, so only visible detail contributes to scene
complexity. However, updates (i.e. paging & decompressing) can be a bottleneck;
if you're moving fast, you could get into trouble trying to update all the
high-res textures. The easy solution is to limit texture and geometry detail
as a function of speed - i.e. don't display 1 m textures at Mach 5 (motion
blur!).

The real problem is that it's hard to get detailed textures for the whole
world (and storage hungry!!). What I'd like to experiment with later on is to
let a classifier run over the globally available 28.5m landsat textures, and
use the resulting classifications to generate missing detail at runtime. But
first things first...

 How is the mipmapping handled (if it currently uses mipmaps)?

Well, in a way, the texture LODs emulate aspects of mipmapping. The
ground texture is partitioned in a quadtree scheme, where each quadtree node
holds part of the texture at constant resolution (e.g. 128x128 pixels). The
root covers the whole texture domain, and children always cover their
respective quarter of the parent's domain. So, effectively, each parent is a
downsampled version of its children.
The LODs are chosen in a way which ensures that supersampling orthogonal
to the viewer is limited to a factor of 2 (the factor can be higher along
the viewing direction, however). Together with anisotropic filtering, this
gives very good results.
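
As a rough illustration of that selection rule (not the actual engine code; the
node layout and the error metric are simplified assumptions of mine):

#include <cmath>
#include <vector>

struct QuadNode {
    float center[3];         // node center in viewer-relative coordinates
    float extent;            // edge length of the node's square domain (meters)
    float meters_per_texel;  // e.g. extent / 128 for 128x128 texture tiles
    QuadNode *children[4];   // null for leaf nodes
};

// meters_per_pixel_at_unit_dist ~= 2*tan(fov/2) / viewport_height_in_pixels.
// Refine while a texel of this node would project to more than one pixel; the
// chosen leaf then has a projected texel size between 0.5 and 1 pixel, i.e.
// supersampling bounded by a factor of 2 (each refinement halves the texel).
static void selectLOD(const QuadNode *node, const float eye[3],
                      float meters_per_pixel_at_unit_dist,
                      std::vector<const QuadNode*> &out)
{
    const float dx = node->center[0] - eye[0];
    const float dy = node->center[1] - eye[1];
    const float dz = node->center[2] - eye[2];
    const float dist = std::sqrt(dx*dx + dy*dy + dz*dz);

    const float pixels_per_texel =
        node->meters_per_texel / (meters_per_pixel_at_unit_dist * dist);

    if (pixels_per_texel > 1.0f && node->children[0] != 0) {
        for (int i = 0; i < 4; ++i)
            selectLOD(node->children[i], eye, meters_per_pixel_at_unit_dist, out);
    } else {
        out.push_back(node);
    }
}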

 What will the maximum visual range be?

Also depends on the available detail, resolution, permitted screen space error
- hard to tell, but I think nothing to worry about. For example, I get good
performance (1024x768, Duron 1GhZ, GeForce3, Mach2) without limiting
visibility for a whole UTM-zone dataset (with 28.5 m textures, normal maps and
SRTM3 elevation), that should be a few hundred kilometers of visual range. 
As stated earlier, the nearer (fast-moving) detail is more problematic than 
the distant scenery because of the frequent updates; for the same reason, 
hard turns are evil :-)

hope to have answered your questions,

 Manuel



Re: [Flightgear-devel] Runway lighting

2005-01-27 Thread Manuel Massing
Hi,

 in real life.  Currently the lighting at EGLL or KSFO drops my frame rate
 from around 30 to about 10.  Based on a rough estimate of light numbers, I
 reckon that ditching the green taxiway centerlights might get back 3 - 4
 fps, not brilliant but a start.  Note that the EGLL poly count is already
 hitting my frame rate to begin with - at daytime it's about 60 with view
 away from airport, 30 with view including airport.  Then 10 with the
 lighting added.  The frame rate with lighting enabled at EGLL is completely
 independent of anisotropic filtering, FSAA, or screen resolution - it's
 pegged solidly at 10.  I guess it's either CPU or AGP bus limited - any way
 to try and find out / guess which? [AMD XP2000+, GF5900XT 128M, 4x AGP].

most probably CPU limited. I'm not sure, but maybe a profiler could give you
some useful information (if OpenGL symbols and call graph profiling are
available).

Do you have any triangle counts for the particular scene? I assume
your system should be able to render 50 million multi-textured triangles
per second.

bye,

 Manuel



Re: [Flightgear-devel] How to convert from WGS84 coordinates?

2005-01-24 Thread Manuel Massing
Hello Robicd,

   I've made a .ase 3d object (a Villa of my town) for a scenery. I have
 a satellite picture of the place where the Villa resides, which has
 datum wgs84 coordinates of the two corners of the bitmap. I really
 don't know how to convert such coordinates (1st corner is
 353620.2/4225543.6, 2nd corner is 354212.2/4225976.1) to a format
 suitable for a .stg file.

These coordinates don't mean much by themselves; you need to know which
projection they relate to - probably a UTM projection in your case. I would
recommend you install GDAL (http://www.gdal.org/), and use gdalinfo to get
projection information for your file. You can then use gdal_translate +
gdalinfo or a proj4 tool to convert between projected or pixel coordinates
and lat/lon.

cheers,

 Manuel



Re: [Flightgear-devel] splash screen

2005-01-17 Thread Manuel Massing
Hi,

  I've been holding off my code changes already since Christmas. ;-)

why not tag the planned releases as branches? This way development can
continue in HEAD while the releases can be tested and bugfixed independently.
This is fairly standard procedure for open source projects (e.g. KDE, gcc,
FreeBSD).

cheers,

 Manuel



Re: [Flightgear-devel] C++ question

2005-01-15 Thread Manuel Massing
Hello Christian,

If I understand your problem right, you could
use class pointers (but you won't achieve strong 
typing at compile time), or templates.

TEMPLATE EXAMPLE:

template <class T> class A {
public:
 virtual void foo(T param);
};

CLASS POINTER EXAMPLE:

class A {
public:
 virtual void foo(A* param);
};

class B : public A {
public:
 virtual void foo(A* param)
 {
  B* cast = dynamic_cast<B*>(param);
  if (cast) {
   // ... B-specific handling ...
  }
 }
};

cheers,

 Manuel



Re: [Flightgear-devel] C++ question

2005-01-15 Thread Manuel Massing
 template <class T> class A {
  virtual void foo(T param);
 };

Maybe I should add how to derive the class B from the template:

class B : public A<B> {
 ...
};
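
Putting the two snippets together, a complete (compilable) variant might look
like the following; class and method names are purely illustrative, and the
parameter is passed by reference here for convenience:

#include <iostream>

template <class T>
class A {
public:
    virtual ~A() {}
    virtual void foo(T &param) = 0;
};

class B : public A<B> {
public:
    virtual void foo(B &param)
    {
        // 'param' is statically known to be a B here -- no dynamic_cast needed.
        std::cout << "B::foo called with another B\n";
    }
};

int main()
{
    B b1, b2;
    A<B> &base = b1;
    base.foo(b2);   // virtual dispatch, with compile-time type safety
    return 0;
}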

bye,
 Manuel



Re: [Flightgear-devel] alternative terrain engine integration

2005-01-11 Thread Manuel Massing
Hello,

  Would that be possible? What is the policy for gaining
  CVS write access to the fgfs repository?

Hmm, apparently the thread died an abrupt death, so I humbly
ask again:

What can I do to gain CVS access? If you have any reservations
or further questions about the project, please let me know.

thanks,

 Manuel



[Flightgear-devel] alternative terrain engine integration

2005-01-10 Thread Manuel Massing
Hi,

I want to start to integrate an alternative terrain engine 
with flightgear 
(http://baron.flightgear.org/pipermail/flightgear-devel/2004-September/030853.html)

For this, I need to adapt flightgear to use an abstract terrain API, which
will encapsulate the current and new terrain engine transparently. 

As this will involve some (mostly small) changes all over the place, it would 
be great if I could work on a CVS branch.
 
Would that be possible? What is the policy for gaining
CVS write access to the fgfs repository?

Of course, I will post planned changes on the mailing lists for 
discussion, but I want to get the bureaucratic stuff sorted out first :-)

cheers,

 Manuel



Re: [Flightgear-devel] alternative terrain engine integration

2005-01-10 Thread Manuel Massing
Hello Erik,

 That's great, I already wondered what happened to that project. This 
 would really be a great addition for FlightGear.

Unfortunately I am studying and am currently trying to compensate for the
tremendous laziness of my past semesters :-) So that project had to wait for
the Christmas break.

 As I understood you where using your own SceneGraph engine, what would
 be the best way to handle this;

 1. Adding a SceneGraph API
 2. You change your code to use plib
 3. FlightGear starts to use your SceneGraph library

I am not yet sure what the best solution will be, but I want to
either:

 1) Wrap it into a plib scenegraph node
 2) Abstract out the scenegraph and only offer a render() method,
which would just render to the current OpenGL context.

I prefer the second method, because of the simplicity of the interface;
implementation-wise the difference is small, it's more of a design question 
at what level the rendering should be encapsulated. IMO the earlier, the
better (i.e. simpler). 

 Hmm, I've seen work on branches and they have their pro's and con's. I'm 
 not sure I like branches all that much.

I think in this case a branch makes a lot of sense, because otherwise the
modifications would greatly disturb the main branch; or I would be forced
to hold back a gigantic monolithic patch until coding & testing has finished,
which would leave me without version control (and others wouldn't be able to
test or contribute).

bye,

 Manuel



Re: [Flightgear-devel] alternative terrain engine integration

2005-01-10 Thread Manuel Massing
Hi,

 If my memory serves, previous big changes to the codebase have been
 handled by having a conditional compilation option which switches on the
 new code (and switches off some old code if needed) and putting all changes
 in CVS HEAD. This allows people to try it if they want to, and avoids what
 I understand is the main problem of CVS branches, which is reintegration
 when it is complete.

I don't exactly need to do big changes (as in many LOC), but some intrusive
and scattered ones. Conditional compiles would be a _major_ hassle to this
end. OTOH, I have never had any notable trouble merging branches...

bye,

 Manuel



Re: [Flightgear-devel] alternative terrain engine integration

2005-01-10 Thread Manuel Massing
Hi Norman,

 In the paper this appears to be based on a 'flat Earth' model
 i.e. lon lat are taken to be simple X, Y or Cos(medianX)*X,Y

 Perhaps I am missing something or you have extended the engine
 since this was written ?

I don't remember if this was mentioned in the paper, but we use
vertex shaders to simulate earth curvature (it could also be done
on the CPU). The underlying datasets are projected; for whole-earth
visualisation, we split the earth into UTM zones (transverse Mercator
projection). This is important in order to limit map distortions and to
retain valid error bounds for our elevation and texture data.
I would have to look at the projection you mentioned, but I don't think it
would be very well suited for our engine; because of its global nature there
will inevitably be areas of high distortion. Additionally, the fact that a
single landsat-textured UTM zone is a few hundred MB in size makes a monolithic
global dataset impractical.

 Are you folks familiar with this work
 http://globe.sintef.no/documentation/projection.html

bye,

 Manuel



Re: [Flightgear-devel] alternative terrain engine integration

2005-01-10 Thread Manuel Massing
Hello Christian,

 Probalby the easiest way would be to create an independant program
 first, that communicates with FGFS via the network api.

 The benefit is a very fast start on the rendering side - w/o much needed
 internal FGFS knowledge and w/o the need to synchonize development at
 the beginning.

Thanks for the suggestion, but as the rendering engine is (mostly)
finished, there will not be too much development effort on this side
(hopefully! :-)).

Given how entangled the simulation is with terrain handling,
I don't think externalizing it via the network would be any easier...
you would still have to set the same hooks, regardless of whether you couple
via a C++ API or the network.

bye,

 Manuel



Re: [Flightgear-devel] crease patch and Dlists - maybe answer VBOs?

2004-10-15 Thread Manuel Massing
Hi,

 O.k., now I know that VBO stands for the vertex_buffer_object OpenGL
 extention, which is, for example, _not_ present in XFree86-4.3 !
 I assume VBO is therefore not a valid choice,

The availability of the VBO extension depends only on your drivers, not
on the X server (and I am pretty sure Mesa supports it as well). A good
implementation would choose transparently between VBOs, VARs, or display lists
at runtime (and I agree that its place would be in the scenegraph).

But I guess the main performance issue is not the rendering method, but
the number of GL calls (e.g. display list calls). 20k display list calls per
frame (based on statistics posted by Melchior, assuming that the allocated
display lists are actually called) seems pretty excessive. NVidia recommends
a maximum of 1000-2000 vertex array operations per frame for good performance.

As a guideline: using VBOs or vertex array ranges, you should be able to push
about 50 million textured triangles per second on a GeForce3-class card.
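
A hedged sketch of such a runtime choice (illustrative only; real code would
also have to resolve the ARB/NV entry points via glXGetProcAddress or
equivalent before using them, and a plain substring test can give false
positives on extension names that share a prefix):

#include <GL/gl.h>
#include <cstring>

enum UploadPath { PATH_VBO, PATH_VERTEX_ARRAY_RANGE, PATH_DISPLAY_LIST };

static bool hasExtension(const char *name)
{
    const char *ext = (const char *) glGetString(GL_EXTENSIONS);
    return ext != 0 && std::strstr(ext, name) != 0;
}

// Pick the fastest geometry upload path the driver advertises.
static UploadPath chooseUploadPath()
{
    if (hasExtension("GL_ARB_vertex_buffer_object"))
        return PATH_VBO;
    if (hasExtension("GL_NV_vertex_array_range"))
        return PATH_VERTEX_ARRAY_RANGE;
    return PATH_DISPLAY_LIST;
}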

bye,

 Manuel



Re: [Flightgear-devel] Landsat 7 data for FlightGear

2004-09-25 Thread Manuel Massing
Hello Georg,

thanks for your input!

 These screenshots are rather old and made for other purposes. But you may
 see the limits of these low-res satellite data. They are good as a very
 realistic background but you loose important landmarks (smaller streets,
 railway-tracks, smaller rivers ..). As the actual FlightGear scenery is not
 bad for VFR flights (low altitude) I would suggest that low-res satellite
 pics could serve as some background-texture with high reality effect and
 some of the actual data-set (streets, railways, runways, power-lines ...)
 are drawn on top of the satellite picture (must be synchronized). Combined
 with SRTM this would be a really great improvement.

Yes, I agree that landsat textures lack detail in low-altitude scenarios. This
is why I mentioned that it would be a good idea to add procedural textures,
procedural geometry or more detailed textures in specific regions of interest
(e.g. airports).

What I mean by procedural texture or geometry is the creation of
detail textures and the addition of geometric models at runtime (on the
fly!). This could be done e.g. based on known landuse (possibly by letting a
classifier analyze the landsat textures at scenery generation time or at
runtime), and it can easily be integrated with the engine's LOD management (so
that detail is only generated when necessary). Additionally, as you suggest,
it would be great to model roads, rivers, etc. derived from vector datasets.

What is the current way of generating roads etc? Are the datasets used
consistent with the elevation data?

There are a few good reasons why global high-res textures would not be
practical IMO, above all the sheer size (a jpeg-compressed 6degx8deg UTM zone
at 1 meter resolution would consume around 200 GB!), and the problem of
consistent availability. But this is a question of data availability and
handling, given that the engine scales well.

bye,

 Manuel



Re: [Flightgear-devel] UK Photo Scenery

2004-09-24 Thread Manuel Massing
Hello Oliver,

 This sounds very interesting.
 But i also have some questions about this engine.
 Is it a irregular grid or regular grid engine?
 If the latter is the case how many lod levels does the engine allow
 and is there a barrier if your viewpoint is to far away from the grid?

The engine basically uses a quadtree approach to manage preprocessed geometry
tiles, where each quadtree cell contains a triangulated irregular network
(i.e. a triangle mesh) which approximates the terrain with a given,
LOD-dependent accuracy. These LODs are calculated and compressed in a
preprocessing stage. For details on the simplification and rendering process,
please have a look at our paper, available at:

 http://cg.cs.uni-bonn.de/publications/publication.asp?id=194&language=de

 If the lod level is high enough and there wouldn't be an altitude barrier
 then this would allow us to use such an engine for seemless planet
 rendering. In other words using a rocket in flightgear and fly into space
 or reentering earth from space using a space shuttle.
 This would also make sense for the X-15 aircraft.

As we plan to build a virtual globe simulation for our project, seamless
planet rendering will be an area I'll be working on... This will most probably
be done using a large-scale spherical model in which the UTM datasets can be
embedded (it is a good idea to chop up the earth into multiple datasets
because of distortions you inevitably get when projecting datasets; and also
to keep things modular). We already simulate earth curvature using vertex
shaders.
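
For illustration, the curvature correction itself boils down to something like
the following CPU sketch (in the engine it lives in a vertex shader; the
constant, the names and the tangent-frame assumption are mine):

#include <cmath>

static const double EARTH_RADIUS_M = 6371000.0;

// Vertices are expressed in a local tangent frame around the viewer; the
// vertical coordinate is dropped by the sagitta of the sphere at horizontal
// distance d. Exact value: R - sqrt(R^2 - d^2); d^2/(2R) is the usual
// small-distance approximation.
static double curvatureDrop(double x, double y)
{
    const double d2 = x * x + y * y;
    return d2 / (2.0 * EARTH_RADIUS_M);
}

// Usage (per vertex): vertex.z -= curvatureDrop(vertex.x, vertex.y);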

 Then i have another question about the Landsat datasets.
 Where can i download this true color datasets?
 All i have found was the raw Landsat dataset which was divided into 7
 channels. I merged the first 3 channels (the visible ones for blue, green
 and red) with gimp but the result was a true color picture that was somehow
 false colored, strictly speaking it was to much red and to less green
 So do you know a way how to merge these 3 channels in a way that
 the result looks really like a true color picture?

Unfortunately, to my knowledge, there are no freely available true-color
landsat mosaics. You will have to mosaic and color-correct these landsat
datasets yourself. I have written a few tools for automated color-matching
of these datasets (although there is still much room for improvement), and hope
to be able to release the toolchain for automated landsat/SRTM3 dataset
generation alongside the engine source code soon.

regards,

 Manuel



Re: [Flightgear-devel] UK Photo Scenery

2004-09-24 Thread Manuel Massing
Hi,

 It sounds VERY interesting, the sample images and video look extremely
 impressive. If it does get released under the GPL, and can be integrated
 then I think it'd be a crime not to try.

 What sort of tools are available for producing the scenery?

Well, I have written a few perl scripts and C tools to download and reproject
SRTM3 and landsat data for a given UTM zone (thanks, GDAL!). The landsat
images are then automatically color-corrected, reprojected and mosaicked
using a set of custom C tools. The same goes for SRTM3 (without the
color correction, obviously :-) ). The generated heightfield and texture are
then preprocessed into a quadtree data structure using the
simplification & compression tools (part of the terrain rendering engine).
The whole process is automated (mostly perl-driven), but can be improved upon
in a few points (especially color correction). Maybe noteworthy to add that
the processing for a UTM zone may take a few hours, depending on available
bandwidth and processing power.

 How difficult is it to include other 3d models in the rendering? (Things
 like aircraft, ground features, etc)

Well, this is one of the integration problems which needs to be solved. I think
the easiest thing would be to encapsulate the terrain rendering into its own
scenegraph node, and let flightgear handle the ground objects separately
(probably placing them using elevation queries or some such).

I am not quite sure, but IIRC the separation between terrain and ground
objects would require a bit of untangling on the FlightGear side, as
currently runways and ground objects seem to be interwoven with the
tile/terrain management. I'd have to look into that, though - just my first
impression...

However, it is certainly also possible to incorporate ground object rendering
into the terrain engine itself (and such a feature is planned), but I guess it
might make it harder to keep the terrain rendering subsystem interchangeable.
And I do think that it would be a good idea to couple the terrain subsystem
as loosely as possible to flightgear (e.g. to help keep the old system
running). Maybe a few line-of-sight, intersection, elevation and
ground-type query routines might just do the job for such an API (besides the
scene graph node, of course)?
I hope someone with inside knowledge will comment (but I'll also read a
bit into the code, promised :-)).

I am not sure, but maybe we would also need special handling for runways,
to handle inconsistencies between terrain and runway elevation, or irregular
terrain underneath the runway. The easiest way would probably be to modify the
input height field prior to mesh generation (while thinking hard about how to
avoid z-buffer artefacts etc.)... How is this currently handled?

Anyway, I hope we'll be able to release the engine soon (but think weeks or
months, not days!), but I will try to get the flightgear integration work as
far as possible in the meantime...

And I certainly appreciate any insights into flightgear terrain issues
you might give :-)

cheers,

 Manuel
