> One example that strikes me is the c172p, though I'm biased as one of the
> maintainers of the aircraft, and it is rated accurately according to
> your criteria :)

Compared with, say, the A-10, the F-14b or the Tu-154b (which is not in
the Git repository) - how would you rate the c172p cockpit? Would you say
it has the same quality, or that it is better or worse?


> I'm with Martin and Vivian on this - I don't believe that
> photo-textures add much to the
> "wow" factor

Please take a look at the aircraft which actually are at the top of the
list. They don't necessarily use photographs as textures (as far as I can
tell), but they resemble a photograph rather than rendered polygons -
textures show rust, wear and tear, gauges show glass reflections, and so
on. I think the c172p could get a 'wow!' factor that way, rather than by
using actual photos as textures. I may have chosen the word badly -
'photo-realistic' doesn't mean 'photo-texture' in what I wanted to say; it
means that a screenshot of the cockpit resembles a photograph of the
cockpit.

> and I also agree with Vivian that this isn't really a
> particularly good indication
> of quality. However, beauty is in the eye of the beholder ;)

*sigh* Correlation is not causation - I seem to be unable to get that
point across.

There is no causal relationship between visual detail and quality, i.e. it
is *theoretically* possible to make a model which scores high in visual
detail, but is low quality. *In practice*, it turns out that I find my
(vague) idea of the quality of the model usually close to the visual
detail.

To give an example: I would rate the quality of the instrumentation and
procedures in the c172p a full 10 out of 10. The visual detail scores 7 -
just 3 points lower. If that generalizes, it means that if you pick an
aircraft with a visual score of 8, you are never going to be completely
disappointed by its systems, so while the list would not correspond
exactly to a quality rating, it would give you a useful indication of what
the quality list is going to look like.

About the worst failure of the scheme I'm aware of is the Concorde, to
which I would assign 10 for procedures and instrumentation but have rated
5 in visuals - in all other cases I know of, visuals and
instrumentation/procedures differ by no more than 2, rarely 3, points.
From a different perspective - take another look at the top of the list:
the IAR 80 has emergency procedures for lowering the gear without power,
the MiG-15bis has a detailed startup procedure, models stresses on the
airframe and lets you overheat the engine, and the F-14b comes with
seeking missiles and a really detailed radar system. I can't see that the
scheme has moved aircraft to the top which 'just look pretty' without
having a good measure of quality to them as well.

So while I am aware that there is no reason that visual detail and quality
*must* correlate, I find that in practice they do. Which means that the
list works better in practice than theoretical considerations a priori
would suggest. That's something I did not know before making it, nor did I
expect it, but I realized it after the fact. Is that easier to understand?
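The bounded-difference argument above can be put in a few lines of code. This is just an illustration, using only the two score pairs quoted in this mail (c172p and Concorde); the dictionary and variable names are made up for the sketch:

```python
# Scores quoted in this mail: (visual detail, instrumentation/procedures).
# The claim: if |visual - quality| <= d for every aircraft, then an
# aircraft with visual score v has quality of at least v - d, so sorting
# by visuals roughly predicts the quality ordering, up to d points.
aircraft = {
    "c172p":    {"visual": 7, "quality": 10},
    "Concorde": {"visual": 5, "quality": 10},  # the worst outlier I know of
}

# Largest observed gap between the two scores.
d = max(abs(a["visual"] - a["quality"]) for a in aircraft.values())
print(d)  # prints 5 - the Concorde gap

# The visual score then acts as a lower bound on quality, up to d.
for name, a in aircraft.items():
    assert a["quality"] >= a["visual"] - d
```

With the usual gap of 2-3 points rather than the Concorde's 5, the bound is of course much tighter.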


> As Curt and others have pointed out, there certainly is a place for
> rating aircraft to
> publicise those that are hidden gems, and your list is certainly going
> to make me look at some new aircraft.

Well, you just made me happy :-)

> I think a more fruitful approach would be to formalize various rating
> requirements,
> such that anyone can evaluate an aircraft against largely objective
> criteria. This would
> remove the need for one person to evaluate all the aircraft, which as
> you've pointed out
> is a herculean task. Such ratings would certainly need to include
> cockpit quality, and
> your criteria would form a good basis, even if we disagree on the
> importance of
> photo-textures :)

Hm, after skimming the page, there's basically a set of knock-out criteria
for many of the suggested schemes. I've thought about this for a while,
and as far as I understand, any scheme which is to sort the whole set of
aircraft needs to be fair, needs to generalize and needs to be viable. If
you're interested in rating only a subset of 20 aircraft, then much more
involved schemes are possible.

'fair' means that every aircraft is judged the same way - either by a
single person (or by a group of people whose opinions are averaged), or
against a set of sufficiently well-defined objective criteria, as you
state above.

'generalization' means that one needs to be able to apply it to (almost)
every aircraft in the repository. Judging realism based on first-hand
experience in real aircraft is a good criterion and works really well for
a number of aircraft, but it doesn't generalize (and it isn't necessarily
'fair') - I'm guessing we have a serious lack of people who fly supersonic
jets on a regular basis.

And 'viability' means that it must be doable in a realistic amount of time
- an 8-hour evaluation profile to judge the FDM can be done for 20 planes,
but not for 400.

Applying these three requirements to the proposed ideas cuts things down
pretty drastically. Which is why I came up with such a dumb 'visual'
scheme in the first place :-)

Cheers,

* Thorsten



_______________________________________________
Flightgear-devel mailing list
Flightgear-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/flightgear-devel
