On Wednesday 05 Aug 2009, Curtis Olson wrote:
> On Tue, Aug 4, 2009 at 1:21 PM, leee wrote:
> > Fair enough.  To be honest, the question was at the limits of
> > my understanding.  What inspired it though is that when I'm
> > rendering any of my 3D stuff the rendering process is
> > distributed across several systems - the single scene is split
> > into many separate boxes, and because the performance of the
> > different systems in my render farm varies widely, I typically
> > split the scene up into 20x10 boxes.  Now this is all
> > ray-traced software rendering, not hardware rendering, and
> > there is an additional overhead because the scene needs to be
> > split up and the subsequent results combined, but the bottom
> > line is that this technique can allow much more processing
> > power to be used and certainly enough to compensate for the
> > overhead.  I've honestly no idea though, how far, or even if
> > this technique can be applied to h/w rendering.
>
> Ray tracing (based on my fading college experience) is an awesome
> example of a task that can be parallelized very well.  If each
> node has the entire scene definition, then the individual nodes
> can render individual pixels with no need to communicate with
> other nodes. (That may not be quite entirely true (?) if more
> advanced rendering techniques are being used, but I'm just
> guessing.)  So at the start you just divide up the pixels amongst
> your render farm and let them go.  Once all the pixels are
> rendered, you can just assemble them into the completed image.

I think you've pretty much got it right, except that I don't think 
there's any need for internode comms at all.  More advanced 
techniques seem to boil down to rendering different channels in 
separate passes, i.e. you might render the specular, reflection and 
shadow channels separately, but even then I don't think there's 
any need for internode comms.  Other options are to render 
different frames on different nodes, or, if you're doing 3D stuff, 
to render alternate fields, which in some ways is just a special 
case of box rendering.  Like I said, I've no idea how these 
different techniques might be applied to h/w rendering, but I am 
sure that some solution will be found - perhaps the SLI and 
Crossfire solutions are a start on this - but it's just 
inconceivable that h/w rendering is going to be limited to a 
single process forever.

>
> > I'm not really thinking in terms of 'threading' at all, which I
> > think is a very limited and halfway-house sort of technique.  But
> > neither though do I think it needs to be thought of as a pure
> > real time system.  Rather, I'm thinking in terms of the
> > external FDM mechanism already present in FG.  Running the FDM
> > on its own hardware system doesn't need to be any more real
> > time than the FDM running within FG on the same system but
> > because it's not going to be limited by the frame rate it could
> > safely be run much faster and with proportionately more
> > consistency than within FG.  If you're running it at say 100Hz
> > within FG I would expect to be able to run it several times
> > faster, if not tens of times faster if the system it was
> > running on wasn't spending most of its time rendering. You'll
> > still get a variation ...
>
> Maybe a lot more variation than you would expect ... especially
> if other things are running on that core at the same time...

Well, I think that argument applies equally to running FG as a 
single process on any system.  Just as you wouldn't expect the 
current FG architecture to work well on a system that's busy 
running other stuff, you shouldn't expect distributed subsystems 
running on their own systems to be immune from other workloads 
either.

>
> > in the rate that the FDM runs at but I
> > suspect that the variation would be about the same in absolute
> > terms.  Let's say that if we get a variation of +/- 10
> > iteration difference per second running within FG it would
> > probably be about the same running on its own system, but as
> > we're running at a higher rate the difference is proportionally
> > smaller, perhaps down from 10% to 1%.
>
> If you sync to the vblank signal in FlightGear (and have enough
> cpu/graphics hp) you can run at a very solid 60hz (or whatever
> rate your display refreshes at.)  If you don't quite have that
> amount of hp consistently for all situations, there is a
> throttle-hz property you can set to force a slower update rate
> (maybe 30 or 20 fps ... ideally you want an even divider into
> your display update rate.)  If consistent frame rates are your
> goal, there are ways to achieve that.  However, because of the
> variability of systems and personal preferences, we don't turn a
> lot of this on by default.

Heh - I can't help seeing that last sentence as a tacit admission 
that FG _is_ pushing the limits of commodity h/w.  It's great if 
people can afford the latest and greatest kit and dump their old 
stuff (to me, hopefully, so I can slot it into my render farm - 
I've actually got one PII-350 MHz system in it - sure, it doesn't 
contribute very much, but it all helps) but it's not a _good_ 
solution, especially when their new kit is likely to have 4-6 
cores, and next year we'll be seeing 12-core CPUs. 

> > Like I say, I don't think we need to achieve strict real time
> > processing here, but we could achieve both higher rates and
> > proportionally smaller variations in those rates using the
> > existing timing techniques in FG.
>
> And I would argue that this is a true statement, even without
> changing a line of code within the project.  There are a lot of
> system configuration and application configuration things a
> person can do to achieve these goals.  I'm not sure you could
> improve on that by splitting the FDM off to a separate
> core/task/process.
>
> One issue you are probably seeing is that even though the
> FlightGear flight dynamics engines are set up to run at a
> guaranteed fixed rate of 120hz already, the autopilot update rate
> floats with the graphics update rate. Ideally the autopilot would
> update at the same rate as the flight dynamics. This was the case
> at one point in the project, but somehow that got lost during
> some portion of some restructure project.

I think that being able to run the FDM, autopilot and Nasal at 
several hundred to a thousand-plus Hz, instead of just around 
120 Hz, would be quite a big improvement.  The fact that the 
autopilot subsystem got slightly borked and hasn't really been 
fixed since suggests to me that FG has outstanding quality issues.
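The sort of decoupling I mean boils down to a fixed-timestep loop: 
the dynamics always advance in their own fixed increments, and a 
time accumulator absorbs whatever the frame rate happens to be.  A 
toy sketch in Python - ToyFDM, the 1000 Hz rate and the constant 
climb rate are all invented for the example, not FG code:

```python
FDM_DT = 1.0 / 1000.0   # hypothetical 1000 Hz dynamics rate

class ToyFDM:
    """Stand-in for a flight dynamics model: just integrates a climb."""
    def __init__(self):
        self.t = 0.0
        self.altitude = 0.0
        self.climb_rate = 5.0   # m/s, constant for the toy

    def step(self, dt):
        self.t += dt
        self.altitude += self.climb_rate * dt

def run_decoupled(fdm, sim_seconds, frame_hz=60):
    """Advance the FDM in fixed steps, independent of the frame rate.

    The accumulator carries leftover time between frames, so the
    dynamics always advance in exact FDM_DT increments however
    irregular the rendering side is.
    """
    accumulator = 0.0
    frame_dt = 1.0 / frame_hz
    for _ in range(int(sim_seconds * frame_hz)):
        accumulator += frame_dt   # a real loop would measure wall time
        while accumulator >= FDM_DT:
            fdm.step(FDM_DT)
            accumulator -= FDM_DT
        # ... rendering / autopilot coupling would happen here ...
    return fdm

fdm = run_decoupled(ToyFDM(), sim_seconds=1.0)
```

The point is that the 1000 Hz inner loop never depends on the 60 Hz 
(or 20 Hz, or wildly varying) outer loop for its accuracy.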

> > I absolutely agree with you - it's finding the parallelism
> > that's the hardest part but things like the FDM, autopilot and
> > Nasal do seem like obvious candidates.  Even if we can't
> > precisely balance the load, we could still improve the
> > performance of parts of FG. Sure, processors are getting faster,
> > if not by increases in clock speed, which has largely reached or
> > is rapidly approaching a plateau, then by improving performance
> > per clock and VLIW, but that too can only be taken so far. 
> > Barring quantum computing, which is still questionable,
> > parallelism is really the only way to go.
>
> We have been nervous that CPU speeds are nearing a plateau ever
> since I've been aware of computers.  Parallelism is a quick way
> to jump ahead in performance.  But moving something out of the
> main FlightGear thread that only consumes an itty bitty tiny
> portion of the overall CPU load doesn't buy us much improvement,
> but may cost a *lot* in terms of effort and potential headaches
> of self inflicted bugs ... that's the point I wish to
> communicate.  Parallelism is good, but we need to use our tools
> wisely in order to achieve our end goals.

If the people who actually make CPUs think that parallelism is the 
way to go, I wouldn't hold out too much hope of significant speed 
increases in the future.  Sure, things will get faster, but not at 
the same rate we've seen so far.

I think that splitting FG into several parts would actually help 
reduce bugs, not increase them.  Bugs would be limited to their 
particular subsystems and couldn't manifest themselves in other 
parts of the system, as they can do with a single monolithic 
system.  Each discrete subsystem can only pass data back and forth, 
not bugs.

>
> > Threading again ;-)  best avoided in my opinion, pretty much
> > for the reasons you give.  Instead of thinking of FG as a
> > single threaded application, it needs to be a collection of
> > standalone programs that run collaboratively - go back to
> > thinking in terms of the external FDM option.
>
> Well there are plenty of downsides to that approach too. 
> Operating system overhead of process and context switching. 
> Potential communication bottlenecks and overhead, system specific
> dependencies, management issues of starting and ultimately
> cleaning up a bunch of independent processes on a system.  User
> configuration issues of trying to set this up optimally on their
> own particular collection of hardware ... again, not that these
> can't be solved, but there could be a tremendous amount of work
> to make it happen, and in many cases, for not much (if any) gain,
> maybe an overall loss in some cases where communication bandwidth
> between modules is by necessity very high.

Yes, there are certainly overheads to be accounted for in 
distributed systems and comms bandwidth is a very significant 
issue, but once again, many of the things you've cited apply 
equally to a single system running a monolithic FG.  I agree that 
these issues could be solved, and more to the point, yes, it would 
be a tremendous amount of work.  Even if everyone was enthusiastic 
about the idea I'd guess it would take several years to get a 
prototype working.  And then we'd find various unforeseen issues 
and might even have to re-design yet again.  In the end though, I 
just don't really think there's any alternative.  The h/w 
world is betting on parallelism for the future and software has got 
to fit the h/w.
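As a sketch of what the comms side of an external-FDM-style split 
looks like: a fixed-layout state packet sent over UDP.  The packet 
layout here is invented for the example - it is not FlightGear's 
actual external FDM protocol - and the "two machines" are just two 
sockets on loopback:

```python
import socket
import struct

# Hypothetical wire format for one FDM state packet: sim time,
# latitude, longitude, altitude as four doubles in network byte
# order.  A fixed layout like this is what makes the subsystems
# independent: only data crosses the boundary, never code.
PACKET = struct.Struct("!4d")

def pack_state(t, lat, lon, alt):
    return PACKET.pack(t, lat, lon, alt)

def unpack_state(data):
    return PACKET.unpack(data)

# Receiver stands in for the renderer process/machine.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # OS picks a free port
addr = recv_sock.getsockname()

# Sender stands in for the external FDM process/machine.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(pack_state(12.5, 51.47, -0.46, 1200.0), addr)

data, _ = recv_sock.recvfrom(PACKET.size)
t, lat, lon, alt = unpack_state(data)

send_sock.close()
recv_sock.close()
```

At, say, 1000 Hz that's only 32 kB/s of state in this toy layout, 
which is why the FDM looks like a good candidate for splitting off 
in the first place - the bandwidth question only bites for 
subsystems that need to share much bigger state.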

I don't think anyone really _likes_ the idea of the extra work 
involved - it's going to be hard work, but living in 
the past isn't going to work either.

>
> > Well, I don't see it as asking people to no longer act as
> > willing volunteers but rather asking people to volunteer to
> > work on specific problems or issues.  Sure, some people will
> > only be interested in implementing new 'cool' features, but
> > others will know that there is a degree of responsibility
> > attached to being allowed to perform on the FG stage.  It would
> > be a very sad thing if an FG developer abandoned FG in a sulk
> > because they couldn't do exactly what they wanted and nothing
> > else - that's just take with no give - and in any case, the FG
> > developers don't seem to be so selfish and small-minded.  If
> > there is a real need, and I believe that there is, I'd like to
> > think that the FG developers are mature enough to accept it.
> >
> > Of course, money and sponsorship could help, but it doesn't
> > mean that it's impossible any other way.
> >
> > I've got an awful lot of fun and satisfaction from FG but I do
> > think it has some problems that it's having trouble facing up
> > to.  I want to see FG getting better and better, both in terms
> > of features and quality but I can't see it happening without
> > facing up to those problems.
>
> Step 1: point out the problem.  Step 2: face up to the problem. 
> Step 3: find a solution to the problem.  Step 4: implement the
> solution to the problem.
>
> Regards,
>
> Curt.

There you go - you've just made it sound easy ;-)

LeeE

_______________________________________________
Flightgear-devel mailing list
Flightgear-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/flightgear-devel
