Hi Tim,

I did not find the time to look into that this weekend. These evenings are my 

On Thursday 30 November 2006 10:03, Tim Moore wrote:
> I'm not sure about "faster" but, as far as I can tell, "equivalent." By
> the way, it would be very nice to have some more timing information
> available than just the frame rate. I don't know if the easiest way to
> get there is to move to Producer so we can get its nice rendering stage
> statistics, wait for Robert to duplicate that in his new osgViewer
> stuff, do it ourselves, or what.
Yep, that needs to happen at some point - having the cull and draw times on 
the screen ...

> > I do not want to remove the old implementation that was happening
> > completely on the GPU in favour of a CPU based one if we end up slower.
> >
> > Anyway, can we keep the old implementation instead of just a plain OpenGL
> > point based one? That means the old one that used triangles that are
> > backface culled and draws points for the front side, where two of them are
> > transparent?
> I don't mind adding the old implementation back in as an option, except
> for the VASI lights, where it really would have no advantage over the
> OSG lights. One would lose some features, such as point size scaled by
> intensity, fading alpha with distance, and blink sequence animations for
> the approach lights.
No, I do not think that we would lose such features completely; we just need 
to implement exactly what we need ourselves. The VASI in the osgSim 
implementation has some problems. I will not comment on the size of the 
lights here; they just need to be adjusted somehow.
But I have objections to the basic way it is done.
We had a smooth transition phase where the lights went smoothly from red to 
white. The osgSim one just switches hard. I also do not see how the osgSim 
API would allow changing the color of a point smoothly. We could render both 
in a transition area and hope that blending does the trick - but that is a 
really bad idea IMO.
So especially for the VASI I definitely want to use our own implementation 
that has nothing to do with osgSim.
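To illustrate what I mean by a smooth transition, here is a minimal sketch of 
how our own VASI light could blend from red to white across a small band of 
glide-slope angles instead of switching hard. The function name, the target 
angle, and the band width are all illustrative assumptions, not osgSim API:

```cpp
// Hypothetical VASI color helper (not osgSim API): blend red -> white
// across a narrow band of glide-slope angles.
#include <algorithm>

struct Color { float r, g, b; };

// glideSlopeDeg: angle of the eye above the light's horizontal plane.
// Below (targetDeg - halfBandDeg) the light reads red, above
// (targetDeg + halfBandDeg) white, with a linear blend in between.
// The 3 deg target and 0.25 deg half-band are illustrative values.
Color vasiColor(float glideSlopeDeg,
                float targetDeg = 3.0f, float halfBandDeg = 0.25f)
{
    float t = (glideSlopeDeg - (targetDeg - halfBandDeg)) / (2.0f * halfBandDeg);
    t = std::max(0.0f, std::min(1.0f, t));   // clamp blend factor to [0, 1]
    return Color{ 1.0f, t, t };              // red (1,0,0) -> white (1,1,1)
}
```

Rendering two overlapping points and hoping that blending sorts it out would 
never give this kind of controlled interpolation.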

Also, the current osgSim implementation does directional lights with a hard 
switch-off when the eye point moves beyond 90 deg to the normal.
The previous implementation faded away nicely. That could be done with 
osgSim's sector, but I still wonder whether it would then be slower...
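For comparison, the nice fade-out of the previous implementation amounts to 
something like the following sketch (function name and fade-start angle are 
my assumptions, not any existing API):

```cpp
// Hypothetical directional-light intensity: full brightness up to
// fadeStartDeg off the light's normal, then a linear fade to zero at
// 90 deg, instead of osgSim's hard switch-off.
#include <algorithm>

// angleDeg: angle between the light normal and the direction to the eye.
float directionalIntensity(float angleDeg, float fadeStartDeg = 75.0f)
{
    if (angleDeg <= fadeStartDeg)
        return 1.0f;                                   // full intensity
    float t = (90.0f - angleDeg) / (90.0f - fadeStartDeg);
    return std::max(0.0f, t);                          // fade, then off
}
```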

The only place where I see a *real* benefit from osgSim is for the rabbit 
time sequences.

Given that I would like to have a common appearance for all lights, I would 
favor doing all of them ourselves.

> > Like I stated before in some private mails I would like to have the osg
> > version only as an alternative to the old implementation if it is faster
> > than the GPU/triangle based one. Maybe not exactly the old
> > implementation, but an implementation that does nothing on the CPU and
> > makes all lighting decisions on the GPU.
> I would like to see some comparisons between the two approaches on a
> low-end machine. You may be overestimating the cost of the CPU based
> approach and underestimating the costs of triangle approach. As I said 
> in private email, the old approach uses a fairly exotic rendering path
> -- polygons rendered in point mode with a texture environment -- which
> very well may be done on the CPU on a "low-end" machine.
I don't think that I overestimate that - we just have plenty of those lights. 
What we definitely gain by putting such computations onto the GPU is 
additional CPU time for things we cannot do on the GPU - not yet, but 
definitely once we have cull and draw on another CPU.

... hold on, I see, this also holds if the draw CPU computes the light 
intensities instead of waiting for the GPU to do that.
In the single threaded case we gain nothing from this argument ...
Hmm ...

> Yes, I think that is a great project, but more work than I have time for
> right at the moment. It would be good for people to learn about OSG's
> support for shading programs *hint*hint* :)
I will send you my hackery on that this evening.

> > For that we still need some factory methods that will provide now the old
> > implementation or the osg::LightPoint based one and later when I have the
> > time to merge my tests into simgear the shader based one.
> >
> > So we need to be able to decide between implementations based on
> > capabilities of the GPU and settings from the user anyway. Can we set up
> > together such an infrastructure and put the old triangle based approach
> > as other alternative below?
> "We?" :) Seriously, I would be happy to look at getting the OpenGL
> extensions available from OSG and making that available to fg in an
> elegant way, but that goes way beyond the scope of light points, so for now
> exposing the choice to the user is probably the best way. Automatically
> benchmarking the user's machine and putting it into slow/fast
> categories would be an interesting project too.
Hmm. I believe that what osg does for us in this area is silently switch off 
features that the OpenGL implementation cannot handle.
But for the cases where this strategy would render our world wrong when a 
feature is missing, we need to do something ourselves.
That does not mean that we will again call the GL versioning/extension stuff 
ourselves; it means that we should ask osg whether it believes that we can 
use shaders, for example, and deny any shader based approach in favor of an 
alternative implementation that renders correctly without that particular 
feature.
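The factory infrastructure I have in mind could then boil down to something 
like this sketch. The enum and function names are made up for illustration; 
the capability flag would in practice come from asking osg (e.g. via 
osg::isGLExtensionSupported for GL_ARB_shader_objects), but here it is just a 
parameter:

```cpp
// Hypothetical selection between light-point implementations, driven by
// user preference plus a capability query.
enum class LightImpl { Triangles, OsgSimPoints, Shader };

LightImpl chooseLightImpl(LightImpl preferred, bool shadersSupported)
{
    // Deny the shader based approach when the GL implementation cannot
    // handle it, falling back to an implementation that renders
    // correctly without that feature.
    if (preferred == LightImpl::Shader && !shadersSupported)
        return LightImpl::Triangles;
    return preferred;
}
```

The old triangle based approach would then simply slot in as one more 
alternative below this factory.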



Flightgear-devel mailing list