[Flightgear-devel] Physics engine of Flightgear

2012-10-16 Thread kunai090


Hello, I'm kunai.
I'm using FlightGear and I have three questions:
1. Is FlightGear's physics engine part of SimGear?
2. What is the physics engine called?
3. If I want to study SimGear and the physics engine, where should I start?
Thank you for your kindness.
--
Everyone hates slow websites. So do we.
Make your web apps faster with AppDynamics
Download AppDynamics Lite for free today:
http://p.sf.net/sfu/appdyn_sfd2d_oct
___
Flightgear-devel mailing list
Flightgear-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/flightgear-devel


Re: [Flightgear-devel] Shader optimization

2012-10-16 Thread Mathias Fröhlich

Hi,

On Tuesday, October 16, 2012 15:17:04 Tim Moore wrote:
> I don't have access to a local copy of the tree at the mo', but I
> remember that this was introduced by Mathias when he added BVH.
Yes. That is to align the bounding volume boxes, as well as the drawables' 
bounding boxes, to the earth's surface - which makes most of them smaller on 
average.
But never rely on something like this in a renderer.

Mathias

--
Don't let slow site performance ruin your business. Deploy New Relic APM
Deploy New Relic app performance management and know exactly
what is happening inside your Ruby, Python, PHP, Java, and .NET app
Try New Relic at no cost today and get our sweet Data Nerd shirt too!
http://p.sf.net/sfu/newrelic-dev2dev


Re: [Flightgear-devel] Shader optimization

2012-10-16 Thread Frederic Bouvier
> From: "James Turner"
> 
> On 16 Oct 2012, at 13:38, Tim Moore wrote:
> 
> > The tile data on disk is actually stored in a
> > coordinate system that is aligned with the earth-centric system, so
> > Z
> > points to the north pole. We rotate the coordinates back to a local
> > coordinate system because that provides a much more useful bounding
> > box for intersection testing and culling... and also lets you
> > program
> > snow lines in shaders :)
> 
> Uh, are you sure about that? My understanding is that the BTG coords
> on the disk are in 'tile local' coords, i.e 'Z is up'

BTG files are in cartesian coordinates and are rotated at load time here:
http://gitorious.org/fg/simgear/blobs/next/simgear/scene/tgdb/obj.cxx#line923

Regards,
-Fred



Re: [Flightgear-devel] Shader optimization

2012-10-16 Thread Tim Moore
On Tue, Oct 16, 2012 at 2:54 PM, James Turner  wrote:
>
> On 16 Oct 2012, at 13:38, Tim Moore wrote:
>
>> The tile data on disk is actually stored in a
>> coordinate system that is aligned with the earth-centric system, so Z
>> points to the north pole. We rotate the coordinates back to a local
>> coordinate system because that provides a much more useful bounding
>> box for intersection testing and culling... and also lets you program
>> snow lines in shaders :)
>
> Uh, are you sure about that? My understanding is that the BTG coords on the 
> disk are in 'tile local' coords, i.e 'Z is up'
>
> James
https://gitorious.org/fg/simgear/blobs/next/simgear/scene/tgdb/obj.cxx#line925

I don't have access to a local copy of the tree at the mo', but I
remember that this was introduced by Mathias when he added BVH.

Tim



Re: [Flightgear-devel] Shader optimization

2012-10-16 Thread James Turner

On 16 Oct 2012, at 13:38, Tim Moore wrote:

> The tile data on disk is actually stored in a
> coordinate system that is aligned with the earth-centric system, so Z
> points to the north pole. We rotate the coordinates back to a local
> coordinate system because that provides a much more useful bounding
> box for intersection testing and culling... and also lets you program
> snow lines in shaders :)

Uh, are you sure about that? My understanding is that the BTG coords on the 
disk are in 'tile local' coords, i.e 'Z is up'

James




Re: [Flightgear-devel] Shader optimization

2012-10-16 Thread Tim Moore
On Tue, Oct 16, 2012 at 12:05 PM, Renk Thorsten  wrote:
>> One can assume that
>> a vec4 varying is no more expensive than a vec3.
> (...)
>> I'm not sure it's useful to think of each component of a varying
>> vector as a "varying" i.e., three vec3 varying values use up three
>> varying "slots," and so do 3 float varying values
>
> I dunno...
>
> Just counting the number of operations, mathematically the best case scenario 
> for interpolating a general vector across a triangle is in Cartesian 
> coordinates where each coordinate interpolates as an independent number, so 
> the cost of a vec4 would be the same as the cost for 4 floats. In any other 
> case, like curved coordinates or Minkowski space, a Jacobian comes to bite 
> and the vector is more expensive than just 4 scalar numbers.

Yes, I acknowledge that interpolating a vec4 requires more operations
than interpolating a float :)

>
> Now, what I don't know if there's some fancy hardware trick which makes a 
> Cartesian vec4 as cheap as a float. In this case, we could use this by 
> combining every four varying float into one varying vec4 and get the same job 
> done for 25% of the cost. But...

That's the crux of it. I thought the answer was obvious, but it very
much depends on the hardware. For a very long time graphics hardware
has had to rasterize, i.e., interpolate, multiple values across screen
space: depth, color, alpha, texture coordinates... I just assumed
that it would be no more expensive to interpolate vector values.
However, this very good web page,
http://fgiesen.wordpress.com/2011/07/10/a-trip-through-the-graphics-pipeline-2011-part-8/,
contains this quote:

Update: Marco Salvi points out (in the comments below) that while
there used to be dedicated interpolators, by now the trend is towards
just having them return the barycentric coordinates to plug into the
plane equations. The actual evaluation (two multiply-adds per
attribute) can be done in the shader unit.

So the cost of interpolating values is indeed incurred as operations
in the (prolog of the) fragment shader. Even the oldest hardware that
supports OpenGL programmable shaders implements vector operations, and
a vector multiply-add has, as far as I know, the same cost as a scalar
operation. On the other hand, the shader compiler might be able to
combine multiple scalar interpolations into vector ops. You can
examine the assembly language for shaders if you want to see what's
actually going on.

I do recommend that web page and the others in the series; they are
quite interesting.

>
> ... the thing I did try is that in adapting the urban shader to atmospheric 
> scattering I ran out of varying slots, I needed two more varying float. I 
> solved this by deleting one varying vec3 (the binormal) and computing it as 
> the cross product - and that gave me the two slots I needed (and presumably 
> one left, but I didn't try that). So this would suggest that indeed each 
> vector component counts the same as a varying float.

They do at the OpenGL API level, which doesn't necessarily correspond
to the hardware implementation.

>
>
>> One reason to pass this as a varying is that on old hardware, GeForce
>> 7 and earlier, it is very expensive to change a uniform that is used
>> by a fragment shader. It forces the shader to be recompiled. So, this
>> is actually a well-known optimization for old machines.
>
> Okay, I didn't know that... But pretty much all weather and 
> environment-dependent stuff (ground haze functions, the wave amplitude for 
> the water shader, overcast haze for the skydome,...) makes use of slowly but 
> continuously changing uniforms (I think gl_LightSource is technically also a 
> uniform), so it doesn't really make sense to have this old machine friendly 
> code in one place in the shader but not in other places in the same shader.
>

True.

>> Also, I want to point out that, in your example, lightdir is in the
>> local coordinate system of the terrain, if in fact you are shading
>> terrain. I would call "world space" the earth-centric coordinate
>> system in which the camera orientation is defined.
>
> gl_Vertex is in some coordinate system which I've usually encountered as 
> 'world space' in shader texts as opposed to gl_Position which is supposed to 
> contain the vertex coordinates in 'eye space'. I realize that gl_Vertex is 
> *not* in the global (xyz) coordinates of Flightgear Earth, although I don't 
> know how the two relate.  Somehow once in the shader world, z is always up... 
> Just a matter of semantics?

I think more usual usage for the local coordinate system is "model
coordinates." The model matrix transforms those coordinates into world
coordinates; the view matrix transforms world coordinates into eye
coordinates. In OpenGL, even in pre-shader days, we tend not to talk
about "world" space much because there is (was) only one matrix stack,
which contains the concatenation of the model and view matrices.
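A minimal sketch of that arrangement in fixed-function-era GLSL (names here are 
illustrative, not from any FlightGear shader):

```glsl
// GLSL 1.20-style vertex shader. There is no separate "world" matrix:
// gl_ModelViewMatrix is already the concatenation of the model and view
// transforms, so model coordinates go straight to eye coordinates.
varying vec3 ecPosition; // eye-space position, a commonly passed varying

void main() {
    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex; // model -> eye, one step
    ecPosition  = eyePos.xyz;
    gl_Position = gl_ProjectionMatrix * eyePos;   // eye -> clip
}
```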

"z is always up" is a matter of convention.

Re: [Flightgear-devel] Shader optimization

2012-10-16 Thread Renk Thorsten
> One can assume that
> a vec4 varying is no more expensive than a vec3.
(...)
> I'm not sure it's useful to think of each component of a varying
> vector as a "varying" i.e., three vec3 varying values use up three
> varying "slots," and so do 3 float varying values

I dunno... 

Just counting the number of operations, mathematically the best-case scenario 
for interpolating a general vector across a triangle is in Cartesian 
coordinates, where each coordinate interpolates as an independent number, so the 
cost of a vec4 would be the same as the cost of 4 floats. In any other case, 
like curved coordinates or Minkowski space, a Jacobian comes in to bite you and 
the vector is more expensive than just 4 scalar numbers.

Now, what I don't know is whether there's some fancy hardware trick which makes 
a Cartesian vec4 as cheap as a float. In that case, we could exploit this by 
combining every four varying floats into one varying vec4 and get the same job 
done for 25% of the cost. But...

... the thing I did try is that in adapting the urban shader to atmospheric 
scattering I ran out of varying slots; I needed two more varying floats. I 
solved this by deleting one varying vec3 (the binormal) and computing it as the 
cross product instead - and that gave me the two slots I needed (and presumably 
one left over, but I didn't try that). So this would suggest that each vector 
component does indeed count the same as a varying float.
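A minimal sketch of that trick (variable names are illustrative, not taken from 
the actual urban shader):

```glsl
// Fragment shader: instead of passing the binormal as its own varying vec3
// (three slots), pass only normal and tangent and rebuild the binormal per
// fragment, at the price of a few extra fragment-shader operations.
varying vec3 normal;   // still passed from the vertex shader
varying vec3 tangent;  // still passed from the vertex shader

void main() {
    vec3 binormal = normalize(cross(normal, tangent));
    // ... tangent-space lighting using normal, tangent, binormal ...
    gl_FragColor = vec4(binormal * 0.5 + 0.5, 1.0); // placeholder output
}
```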


> One reason to pass this as a varying is that on old hardware, GeForce
> 7 and earlier, it is very expensive to change a uniform that is used
> by a fragment shader. It forces the shader to be recompiled. So, this
> is actually a well-known optimization for old machines.

Okay, I didn't know that... But pretty much all weather- and 
environment-dependent stuff (ground haze functions, the wave amplitude for the 
water shader, overcast haze for the skydome, ...) makes use of slowly but 
continuously changing uniforms (I think gl_LightSource is technically also a 
uniform), so it doesn't really make sense to have this old-machine-friendly 
code in one place in the shader but not in other places in the same shader.

> Also, I want to point out that, in your example, lightdir is in the
> local coordinate system of the terrain, if in fact you are shading
> terrain. I would call "world space" the earth-centric coordinate
> system in which the camera orientation is defined.

gl_Vertex is in some coordinate system which I've usually seen called 'world 
space' in shader texts, as opposed to gl_Position, which is supposed to contain 
the vertex coordinates in 'eye space'. I realize that gl_Vertex is *not* in the 
global (xyz) coordinates of the FlightGear Earth, although I don't know how the 
two relate. Somehow, once in the shader world, z is always up... Just a matter 
of semantics?


> I don't think that any varyings -- except for the fragment coordinates
> -- are mandatory, except perhaps on very old hardware. 

I remember at least one text claiming that once you use ftransform(), 
gl_FrontColor and gl_BackColor get values as well - but that source, or my 
memory of it, may be mistaken.

Cheers,

* Thorsten


Re: [Flightgear-devel] Shader optimization

2012-10-16 Thread Tim Moore
Thanks for writing this up. I have a couple of comments, nitpicks really.

On Mon, Oct 15, 2012 at 9:43 AM, Renk Thorsten  wrote:
>
> I thought it might be a good idea to write up a few things I've tried 
> recently and not seen in widespread use - so that either others know about 
> them as well or I can find out what the pitfalls are.
>
> Basically this is about reducing the number of varyings, which is desirable 
> for at least two reasons. First, their total amount is quite limited (I think 
> 32?). Second, they cause work per vertex and per pixel, so their load always 
> scales with the current bottleneck. Their actual workload is just a linear 
> interpolation across a triangle though, so the optimization I'm talking about 
> here brings maybe all together 10-20% gains, not something dramatic, and it's 
> not unconditionally superior to save a varying if the additional workload in 
> the fragment shader is substantial.
>
> Also, the techniques are somewhat 'dirty' in the sense that they make it a 
> bit harder to understand what is happening inside the shader.
>
> * making use of gl_FrontColor and gl_BackColor -> gl_Color
>
> As far as I know, these are built-in varyings which are already there 
> regardless of whether we use them or not. So if we don't use them at all 
> because all color
I don't think that any varyings -- except for the fragment coordinates
-- are mandatory, except perhaps on very old hardware. Generally the
total shader program is optimized to remove any unnecessary
computations. However...

> computations are in the fragment shader, they can carry four components of 
> geometry; if we use a color but know the alpha, there is one varying which 
> can be saved by using gl_Color.a to encode it.

I agree that using a coordinate in a varying that you already need is
a good trick, better than assigning a new varying. One can assume that
a vec4 varying is no more expensive than a vec3.
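A sketch of what reusing the spare alpha component can look like (the fog 
factor and its ramp are made up for illustration; the two stages below are 
separate shader files):

```glsl
// --- vertex shader ---
// Terrain alpha is known to be 1.0, so gl_FrontColor.a can carry an
// unrelated per-vertex scalar (a hypothetical fog factor here).
void main() {
    vec3  litColor  = gl_Color.rgb;  // placeholder for real lighting
    float fogFactor = clamp(-(gl_ModelViewMatrix * gl_Vertex).z / 5000.0,
                            0.0, 1.0);          // made-up distance ramp
    gl_FrontColor = vec4(litColor, fogFactor);  // pack the scalar into .a
    gl_Position   = ftransform();
}

// --- fragment shader ---
// Unpack the scalar and restore the known alpha of 1.0.
void main() {
    float fog = gl_Color.a;
    gl_FragColor = vec4(mix(gl_Color.rgb, vec3(0.8), fog), 1.0);
}
```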

By the way, we only encode the front/back facing info in alpha in
order to get around shader language bugs.

>
> The prime example is terrain rendering where we know that the alpha channel 
> is always 1.0 since the terrain mesh is never transparent. In 
> default.vert/frag gl_Color.a is used to transport the information if a 
> surface is front or backfacing, but in terrain rendering we know we're always 
> above the mesh, so all surfaces we see are front-facing, and we do backface 
> culling in any case.
...
> * light in classic rendering
>
> Leaving Rembrandt aside, the direction of the light source (the sun) is not a 
> varying but actually a uniform. In case we need this in world space in the 
> fragment shader, doing a
> lightdir = normalize(vec3(gl_ModelViewMatrixInverse * 
> gl_LightSource[0].position));
> in the vertex shader and passing this as varying vec3 is quite an overkill.

One reason to pass this as a varying is that on old hardware, GeForce
7 and earlier, it is very expensive to change a uniform that is used
by a fragment shader. It forces the shader to be recompiled. So, this
is actually a well-known optimization for old machines.

Also, I want to point out that, in your example, lightdir is in the
local coordinate system of the terrain, if in fact you are shading
terrain. I would call "world space" the earth-centric coordinate
system in which the camera orientation is defined.
>
> Due to the complexity of the coordinate system of the terrain, it's not clear 
> to me how to get the world space light direction really into a uniform, but 
> we do have it's z-component (the sun angle above the horizon) as a property 
> and can use this as uniform. Since light direction is a unit vector, it means 
> that only the polar angle of the light needs to be passed as a varying then, 
> saving two components.

We could include per-tile uniforms as state attributes in the scene
graph if we decide that we really want them.
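If the sun elevation did arrive as a uniform, the idea in the quote above could 
look roughly like this (the uniform and varying names are assumed, not actual 
FlightGear properties):

```glsl
// Fragment-shader helper: rebuild the full unit light vector from one
// uniform and one varying, instead of interpolating all three components.
uniform float sunElevation;  // hypothetical uniform fed from the property tree
varying float lightAzimuth;  // the single angle passed from the vertex shader

vec3 lightDirection() {
    float cosEl = cos(sunElevation);
    return vec3(cosEl * cos(lightAzimuth),
                cosEl * sin(lightAzimuth),
                sin(sunElevation));  // z is "up" in this local frame
}
```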
>
> In particular for water reflections computed in world space, passing normal, 
> view direction and light direction in world coordinates from the vertex 
> shader (9 varyings) is really not efficient. The normal of water surfaces in 
> world space is (0,0,1) and not varying at all (we do formally have water on 
> steep surfaces in the terrain, but we never render this correctly in any case 
> - in reality rivers don't run up and down mountainsides, and they foam when 
> they run really fast on slopes - and to worry about getting light reflection 
> wrong when the whole setup is wrong is a bit academic). The light direction 
> is really just the polar angle, and since we later dot everything with the 
> normal we really only need the z-component of the half-vector, and that means 
> just two components of the view direction - so it can in principle be done 
> with 3 varyings rather than 9.

I'm not sure it's useful to think of each component of a varying
vector as a "varying", i.e., three vec3 varying values use up three
varying "slots," and so do 3 float varying values. On the other hand,
if you can b