On Tue, Apr 3, 2012 at 7:55 PM, Miles Fidelman <[email protected]> wrote:
>> You seem to be starting from the assumption that process per object is
>> a good thing.
>
> absolutely - I come from a networking background - you spawn a process
> for everything - it's conceptually simpler all around - and as far as I
> can tell, most complex systems are inherently parallel
>
> having to serialize things that are inherently parallel (IMHO) only
> makes sense if you're forced to by constraints of the run-time
> environment

I agree there is a lot of latent parallelism in most systems. But
process-per-object is not necessarily the best way to express that
parallelism. It certainly won't be the most efficient way, and in
practice - unless you don't care about the quality of your simulator -
you'll be adding complexity to regain consistency (i.e. so tanks don't
keep doing things after they should have been blown up, because a message
was lost or arrived out of order). That's what I meant.

> Each loop has to touch each object, every simulation cycle, to see if
> anything has happened that needs attention. It's been a while, but as I
> recall it's something like
> 1. loop one, update all lines of sight
> 2. loop two, update all sensors
> 3. loop three, calculate weapons effects
> 4. loop four, update positions
> 5. loop five, update graphic display
> 6. repeat ad nauseum

In the pipeline model I was suggesting, all five loops run
pipeline-parallel, and we can have data parallelism within each loop
(e.g. running 100 tanks at a time in OpenCL). So I look at that loop and
see easy potential for 500x parallelism, without any loss of determinism
or consistency, and with much lower overheads.

> If I were building one of these, in say Erlang, if a tank is doing
> nothing, it simply sits and does nothing. If it's moving, maybe a
> heartbeat kicks off a recalculation every so often. Everything else is
> a matter of event-driven activities.
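To make the pipeline model concrete, here is a minimal single-process
sketch in Python. All field names, rules, and numbers (the LOS and
targeting predicates, the 20 Hz tick) are placeholder assumptions for
illustration, not anything from a real simulator. The point is the
structure: each of the five loops from the quoted cycle becomes a pure
pass over the whole batch of tanks, so each pass is data-parallel across
tanks (e.g. one OpenCL kernel launch per pass), and because pass i+1
reads only what pass i wrote, the passes can also overlap across frames
(pipeline parallelism) - hence roughly 5 stages x 100 tanks of latent
parallelism, deterministically.

```python
from dataclasses import dataclass

@dataclass
class Tank:
    # Hypothetical fields, for illustration only.
    pos: float = 0.0
    vel: float = 1.0
    in_sight: bool = False
    sensed: bool = False
    hit: bool = False

def update_lines_of_sight(tanks):
    for t in tanks:
        t.in_sight = abs(t.pos) < 100.0    # placeholder LOS rule

def update_sensors(tanks):
    for t in tanks:
        t.sensed = t.in_sight              # sensors track whatever is in LOS

def weapons_effects(tanks):
    for t in tanks:
        t.hit = t.sensed and t.pos > 50.0  # placeholder targeting rule

def update_positions(tanks, dt=0.05):      # one 20 Hz tick
    for t in tanks:
        t.pos += t.vel * dt

def render(tanks):
    return sum(t.in_sight for t in tanks)  # stand-in for the display loop

PIPELINE = [update_lines_of_sight, update_sensors,
            weapons_effects, update_positions]

def simulate(tanks, cycles):
    for _ in range(cycles):
        for stage in PIPELINE:  # serial here; each stage is independently
            stage(tanks)        # data-parallel across tanks, and stages
    return render(tanks)        # can overlap across frames (a pipeline)

tanks = [Tank(pos=float(i)) for i in range(100)]
visible = simulate(tanks, cycles=10)
```

Since no tank reads another tank's state within a pass, running a pass
over 100 tanks at once changes nothing about the result - which is
exactly why determinism and consistency survive the parallelization.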
> The sequential approach pretty much forces things down a path of
> touching everything, frequently, in a synchronous manner.

It's easy enough to just maintain different collections for different
processing. I.e. rather than one big collection of tanks, you can have
five lists - tanks that are moving, tanks that are targeting, etc. - with
pointers back to the master list. But it's often more efficient to just
keep a mode in each tank and process them all (e.g. skip targeting for
those not in firing mode) every instant.

A heartbeat every so often for motion? 20Hz already means a latency of
50ms before you even begin reacting to changes in the environment, which
is borderline adequate for a combat simulator. What are you imagining,
exactly? You seem hell-bent on premature optimization there; don't be
surprised if you get the hell without the optimization.

> Asynchronous, event-driven behavior is a LOT easier to conceptualize
> and code. (Except for line-of-sight calculations.)

Or maybe you're only conceptualizing an idealized version of the code,
the happy path... And collision detection, and a bunch of other things...

> By the way, as soon as you network simulators (at least military ones,
> not necessarily MMORPGs) it's very much a parallel, asynchronous world.

That really depends on the communication protocol, of course. There are
many ways it could have been achieved.

> It all just works. Not all that new either - dates back to BBN's SIMNET
> stuff in the late 1980s.

It works for some purposes. But I've sat in on one presentation that
expressed frustration about rewind, replay-with-variation, and similar
features, and the inability to leverage simulators as testing software.

> FYI: It was having a number of our coders tell me that there's just too
> much context-switching to consider an asynchronous architecture that
> led me to discover Erlang - one of the few (perhaps the only)
> environments that support massive concurrency.
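Returning to the mode-per-tank suggestion above, here is a minimal
Python sketch of what I mean. The mode names and fields are hypothetical
stand-ins for whatever state machine a real entity would carry; the
point is that a cheap flag test inside each pass replaces the five
separate per-activity lists, while every tank is still touched on every
20 Hz tick:

```python
from dataclasses import dataclass

@dataclass
class Tank:
    # 'mode' is a hypothetical field standing in for a real
    # per-entity state machine: "idle" | "moving" | "firing".
    mode: str = "idle"
    pos: float = 0.0
    shots: int = 0

def update_positions(tanks, dt):
    for t in tanks:
        if t.mode == "moving":   # cheap flag test instead of a
            t.pos += 1.0 * dt    # separate "moving tanks" list

def update_targeting(tanks):
    for t in tanks:
        if t.mode == "firing":   # skip targeting for tanks not
            t.shots += 1         # in firing mode, as suggested above

tanks = ([Tank(mode="moving") for _ in range(3)]
         + [Tank(mode="firing") for _ in range(2)]
         + [Tank() for _ in range(5)])

for _ in range(20):              # 20 ticks at 20 Hz = one second;
    update_positions(tanks, dt=0.05)  # 50ms is the floor on reaction
    update_targeting(tanks)           # latency for any tank
```

Idle tanks cost one branch per pass - essentially free - and the tick
rate, not a per-tank heartbeat, bounds how stale any tank's view of the
environment can get.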
There are actually quite a few languages today that support massive
concurrency. And there are languages that support massive parallelism
(like OpenCL).

>> I think there are simple, parallel approaches. I know there are
>> simplistic, parallel approaches.
>
> Not to be impolite, but what point are you trying to make?

Your approach to parallelism strikes me as simplistic. Like saying the
Earth is at the center of the Solar system and the Sun goes around the
Earth. It sounds simple. It's "easy to conceptualize". Oh, and it
requires epicyclic orbits to account for every other planet. Doesn't
sound so simple anymore. Like this, the simplistic becomes a complexity
multiplier in disguise.

You propose an actor per object. It sounds simple to you, and "easy to
conceptualize". But now programmers face challenges controlling latency
and supporting replay, testing, maintenance, verification, and
consistency. This is in addition to the problems hand-waved through,
like line-of-sight and collision detection. It doesn't sound so simple
anymore. The old sequential model, and even the pipeline technique I
suggest, preserve the known, working structure for consistency.

> "... For huge classes of problems - anything that's remotely
> transactional or event driven; simulation and gaming come to mind
> immediately - it's far easier to conceptualize as spawning a process
> than trying to serialize things. The stumbling block has always been
> context-switching overhead. That problem goes away as your hardware
> becomes massively parallel."
>
> Are you arguing that:
> a) such problems are NOT easier to conceptualize as parallel and
> asynchronous, or,
> b) parallelism is NOT removing obstacles to taking actor-like
> approaches to these classes of problems, or
> c) something else?

I would argue all three.

a) You selectively conceptualize only part of the system - an idealized
happy path. It is much more difficult to conceptualize your whole
system - i.e. all those sad paths you created but ignored.
Many simulators have collision detection, soft real-time latency
constraints, and consistency requirements. It is not easy to
conceptualize how your system achieves these.

b) Parallelism is not concurrency; it does not suggest actor-like
approaches. Pipeline and data parallelism are well-proven alternatives
used in real practice. There are many others, which I have mentioned
before.

c) Performance - context-switching overhead - isn't the most important
stumbling block. Consistency, correctness, and complexity are each more
important.

Like you, I believe we can achieve parallel designs while improving
simplicity. But I think I will eschew turning tanks into actors.

Regards,

Dave

--
bringing s-words to a pen fight
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
