Yes, "time management" is a good idea.

Looking at the documentation here, I see no mention of the (likely) inventor of 
the idea -- John McCarthy ca 1962-3 -- or the most adventurous early design to 
actually use the idea (outside of AI robots/agents work) -- David Reed in his 
1978 MIT thesis "Naming and Synchronization in a Decentralized Computer System". 


Viewpoints implemented a strong "real-time enough" version of Reed's ideas 
about 10 years ago -- "Croquet"


The ALSP blurb on Wikipedia does mention the PARC Pup Protocol and Network (the 
"Internet" before the Internet).
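
The "time management" idea, as ALSP/HLA describe it, boils down to delivering 
timestamped events only when no peer can still send an earlier one, so every 
node sees the same event sequence. A minimal sketch, using hypothetical class 
and method names (this is not the ALSP or HLA RTI API):

```python
import heapq

class Federate:
    """Toy node in a distributed simulation, delivering events in global
    timestamp order -- a sketch of conservative "time management" in the
    ALSP/HLA spirit. All names here are hypothetical, not the real APIs."""

    def __init__(self, peers):
        self.queue = []                            # min-heap of (time, seq, event)
        self.peer_times = {p: 0.0 for p in peers}  # last clock heard from each peer
        self.seq = 0                               # tie-breaker for equal timestamps

    def receive(self, timestamp, event):
        # Buffer the event; delivery waits until it is provably safe.
        heapq.heappush(self.queue, (timestamp, self.seq, event))
        self.seq += 1

    def advance_peer(self, peer, timestamp):
        # A peer promises to send no further events before `timestamp`.
        self.peer_times[peer] = max(self.peer_times[peer], timestamp)

    def safe_time(self):
        # No future event can carry a timestamp below the slowest peer's clock.
        return min(self.peer_times.values())

    def deliver(self):
        # Release, in timestamp order, every event no peer can still precede.
        out = []
        while self.queue and self.queue[0][0] <= self.safe_time():
            t, _, e = heapq.heappop(self.queue)
            out.append((t, e))
        return out

f = Federate(peers=["simA", "simB"])
f.receive(5.0, "tank moves")
f.receive(2.0, "radar ping")
f.advance_peer("simA", 6.0)
f.advance_peer("simB", 3.0)  # simB may still send events timestamped in (3.0, 6.0]
print(f.deliver())           # [(2.0, 'radar ping')] -- the 5.0 event is not yet safe
f.advance_peer("simB", 6.0)
print(f.deliver())           # [(5.0, 'tank moves')]
```

The key design point is that causality is enforced by holding events back, not 
by any global clock: an event becomes deliverable only once every peer's 
promised lower bound has passed its timestamp.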

Cheers,

Alan




>________________________________
> From: David Barbour <[email protected]>
>To: Fundamentals of New Computing <[email protected]> 
>Sent: Monday, April 9, 2012 9:44 AM
>Subject: Re: [fonc] Everything You Know (about Parallel Programming) Is 
>Wrong!: A Wild Screed about the Future
> 
>
>Going back to this post (to avoid distraction), I note that
>
>
>Aggregate Level Simulation Protocol
>   and its successor
>High Level Architecture
>
>
>Both provide "time management" to achieve consistency, i.e. "so that the times 
>for all simulations appear the same to users and so that event causality is 
>maintained – events should occur in the same sequence in all simulations."
>
>
>For simulations, you should not conclude that it is easier to spawn a process 
>than to serialize things. You'll end up spawning a process AND serializing 
>things.
>
>
>Regards,
>
>
>Dave
>
>
>
>
>http://en.wikipedia.org/wiki/Aggregate_Level_Simulation_Protocol 
>http://en.wikipedia.org/wiki/High_Level_Architecture_(simulation) 
>
>
>
>The ALSP page goes into more detail on how this is achieved. HLA started as 
>the merging of Distributed Interactive Simulation (DIS) with ALSP. 
>
>
>
>On Tue, Apr 3, 2012 at 8:02 AM, Miles Fidelman <[email protected]> 
>wrote:
>
>>Steven Robertson wrote:
>>>
>>>On Tue, Apr 3, 2012 at 7:23 AM, Tom Novelli <[email protected]> wrote:
>>>>
>>>>Even if there does turn out to be a simple and general way to do parallel
>>>>programming, there'll always be tradeoffs weighing against it - energy usage
>>>>and design complexity, to name two obvious ones.
>>>>
>>To design complexity: you have to be kidding.  For huge classes of problems - 
>>anything that's remotely transactional or event driven, simulation, gaming come 
>>to mind immediately - it's far easier to conceptualize as spawning a process 
>>than trying to serialize things.  The stumbling block has always been context 
>>switching overhead.  That problem goes away as your hardware becomes massively 
>>parallel.
>>
>>Miles Fidelman
>>
>>-- 
>>In theory, there is no difference between theory and practice.
>>In practice, there is.   .... Yogi Berra
>>
>>
>>
>>_______________________________________________
>>fonc mailing list
>>[email protected]
>>http://vpri.org/mailman/listinfo/fonc
>>
>
>
>
>-- 
>bringing s-words to a pen fight
>
>
>
>