> It would need to take in all of the by now well-known variables -
> making it by no means a simple beast to manage.
>
> If this could be automated in some way it would be much easier to  
> capture, and then submit, consistent data.

Let's think about this from the end. What would we like to know?

I'm usually interested in very specific before/after questions. For instance,
I can push some shader code into a conditional clause and benchmark it as
running faster on my system. I'd like to know: does that generalize? I've
learned that this kind of optimization seems to generalize across NVIDIA
hardware, but I'd like to get before/after feedback from a Radeon user.

Or take system-dependent optimizations. Stuart has introduced a cloud LOD
system and sees some framerate gain from it in overcast layers. I've been
playing with it and couldn't see much of a clear difference in performance,
so I just switched it off completely. What I'd be interested in is: on what
hardware do we see a framerate gain, and what LOD distances would people
typically select to get a good balance between visuals and framerate? Or
would they prefer to vary cloud density, or cloud visibility radius? If we
knew what most people select when given the choice, we could set reasonable
defaults and structure the GUI accordingly.

In the present case - take my suspicion that I'm running something wrong
because I have the gut feeling that Rembrandt should be running faster -
what I would really like to know is whether I am indeed doing something
wrong (James, please let me know if you find anything...). It's difficult
to see how a performance benchmark would answer that question unless all
possible causes of low performance are already coded into it.

For these questions, a standardized benchmark isn't terribly useful, because
I'm at least partially after how users would adjust a trade-off once they
start playing with it, or I have a very specific question about a specific
change. A standardized benchmark would, if we get enough data, be more of a
general warning system: suppose we regularly monitor performance on 50
different systems, and after some commit we see a 20% performance drop on 35
of them - that would indicate that the commit might be problematic in some
way. But for this we would need a regular time history - basically, the
monitoring script would have to run and report after every update of either
FG or the drivers.

Well, that's what I would like to know - what kind of information would others 
like to have?

* Thorsten