On Thursday 30 November 2006 01:36, Tim Moore wrote:
> Try http://www.bricoworks.com/moore/lightpt3.diff instead. A last-minute
> typo disabled point sprites.
Is this still faster with point sprites re-enabled?

I do not want to remove the old implementation that ran completely on the GPU 
in favour of a CPU based one if we end up slower.

Anyway, can we keep the old implementation instead of just a plain OpenGL 
point based one? That means the old one that used backface culled triangles 
and drew points for the front side, where two of the three vertices are 
transparent.

Like I stated before in some private mails, I would like to have the osg 
version only as an alternative to the old implementation, and only if it is 
faster than the GPU/triangle based one. Maybe not exactly the old 
implementation, but an implementation that does nothing on the CPU and makes 
all lighting decisions on the GPU.

And as also told before, I would like to have another alternative for the 
main usage where we still do that light intensity decision - together with 
more advanced light texturing dependent on fog density, distance and other 
neat parameters - on the *GPU*. Just use a vertex shader for the view 
direction dependence of the light and fragment shaders for more advanced 
halos. That will require a newer OpenGL implementation, and for that reason I 
believe we need to keep a lighting version for older boards. Again, this all 
happens on the *GPU*. That has the advantage that it is probably way faster, 
and even if it is only about as fast as on the CPU, it does not block CPU 
cycles that could equally well be spent on more advanced physical models, 
better AI traffic or whatever else we need to do on the CPU.
So in the longer term I will favour that shader based approach as the default 
as long as the GPU supports it.

For that we still need some factory methods that provide, for now, either the 
old implementation or the osg::LightPoint based one, and later, when I have 
the time to merge my tests into SimGear, the shader based one.

So we need to be able to decide between implementations based on the 
capabilities of the GPU and settings from the user anyway. Can we set up such 
an infrastructure together and keep the old triangle based approach as an 
additional alternative?
In the long term I believe that we will reduce that back to two alternatives 
again: the shader based one and whichever of the low end card capable 
implementations is faster. For that decision we need *both* up and well 
optimized ...

I will look at the current patch this weekend.



Flightgear-devel mailing list
