Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-09 Thread BGB

On 4/8/2012 8:26 PM, Miles Fidelman wrote:

BGB wrote:

On 4/4/2012 5:26 PM, Miles Fidelman wrote:

BGB wrote:
Not so sure.  Probably similar levels of complexity between a 
military sim. and, say, World of Warcraft.  Fidelity to real-world 
behavior is more important, and network latency matters for the 
extreme real-time stuff (e.g., networked dogfights at Mach 2), but 
other than that, it's IP networks, gaming-class PCs at the endpoints, and 
serious graphics processors.  Also more of a need for 
interoperability - as there are lots of different simulations, 
plugged together into lots of different exercises and training 
scenarios - vs. an MMORPG controlled by a single company.




ok, so basically a heterogeneous MMO, and distributed.




well, yes, but I am not entirely sure how many non-distributed 
(single-server) MMOs there are in the first place.


presumably, the world has to be split between multiple servers to 
deal with all of the users.


some older MMOs had shards, where users on one server wouldn't be 
able to see what users on a different server were doing, but AFAIK this is 
generally not considered acceptable in current MMOs 
(hence the world being divided up into areas or regions 
instead, presumably with some sort of load-balancing and similar).
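
as a rough sketch of the idea (Python; the grid size, server names, and the 
hashing are all made up - a real MMO would presumably rebalance regions by 
actual load rather than hash them):

REGION_SIZE = 512.0                       # world units per region cell (made up)
SERVERS = ["sim-a", "sim-b", "sim-c", "sim-d"]

def region_of(x, y):
    # map a world position to a (col, row) region cell
    return (int(x // REGION_SIZE), int(y // REGION_SIZE))

def server_for(region):
    # pick a server for a region; a real system would rebalance by load
    return SERVERS[hash(region) % len(SERVERS)]

print(server_for(region_of(1400.0, 300.0)))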


unless, of course, this is operating under a different assumption of 
what a distributed system is than one which allows a load-balanced 
client/server architecture.


Running on a cluster is very different from having all the 
intelligence on the individual clients.  As far as I can tell, MMOs by 
and large run most of the simulation on centralized clusters (or at 
least within the vendor's cloud).  Military sims do EVERYTHING on the 
clients - there are no central machines, just the information 
distribution protocol layer.


yes, but there are probably drawbacks to this, performance-wise and 
reliability-wise.


not that all of the servers need to be run in a single location or be 
owned by a single company, but there are some general advantages to the 
client/server model.





reading some stuff (an overview of the DIS protocol, ...), it 
seems that the level of abstraction is in some ways a bit higher 
(than game protocols I am familiar with); for example, it will 
indicate the entity type in the protocol, rather than, say, the 
name of its 3D model.
Yes.  The basic idea is that a local simulator - say a tank, or an 
airframe - maintains a local environment model (local image 
generation and position models maintained by dead reckoning) - what 
goes across the network are changes to its velocity vector, and 
weapon fire events.  The intent is to minimize the amount of data 
that has to be sent across the net, and to maintain speed of image 
generation by doing rendering locally.
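
Roughly, the receiver extrapolates remote entities from the last reported 
state, and the sender only emits a new update when its true position drifts 
too far from what everyone else would predict.  A Python sketch (the threshold 
and field names are made up; these are not the actual DIS dead-reckoning 
algorithms):

import math

THRESHOLD = 1.0   # metres of allowed drift before sending an update (made up)

def extrapolate(state, now):
    # receiver side: predict position from the last reported position/velocity
    dt = now - state["t"]
    return [p + v * dt for p, v in zip(state["pos"], state["vel"])]

def needs_update(true_pos, last_sent, now):
    # sender side: emit a new state update only when drift exceeds THRESHOLD
    predicted = extrapolate(last_sent, now)
    return math.dist(true_pos, predicted) > THRESHOLD

last_sent = {"t": 0.0, "pos": [0.0, 0.0, 0.0], "vel": [10.0, 0.0, 0.0]}
print(needs_update([21.5, 0.0, 0.0], last_sent, 2.0))   # True: drifted 1.5 m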




now, why, exactly, would anyone consider doing rendering on the 
server?...


Well, "render" might be the wrong term here.  Think more about map 
tiling.  When you do map applications, the GIS server sends out map 
tiles.  Similarly, at least some MMOs do most of the scene generation 
centrally.  For that matter, think about moving around Google Earth in 
image mode - the data is still coming from Google servers.


The military simulators come from a legacy of flight simulators - VERY 
high resolution imagery, very fast movement.  Before the simulation 
starts, terrain data and imagery are distributed in advance - every 
simulator has all the data needed to generate an out-the-window view, 
and to do terrain calculations (e.g., line-of-sight) locally.
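
For instance, a crude line-of-sight test against a locally held heightmap 
might look like the following Python sketch (purely illustrative - real 
terrain formats and sampling are far more involved):

def line_of_sight(heightmap, a, b, steps=100):
    # a, b are (x, y, z) points; heightmap[(x, y)] gives ground height, 0 if absent
    (ax, ay, az), (bx, by, bz) = a, b
    for i in range(1, steps):
        t = i / steps
        x, y = ax + (bx - ax) * t, ay + (by - ay) * t
        z = az + (bz - az) * t
        if z <= heightmap.get((round(x), round(y)), 0.0):
            return False          # the sight line dips below the terrain
    return True

terrain = {(5, 0): 120.0}         # a ridge between the two observers
print(line_of_sight(terrain, (0, 0, 50.0), (10, 0, 60.0)))   # False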




ok, so sending polygons and images over the net.

so, by "very", is the implication that they are sending large numbers of 
1024x1024 or 4096x4096 texture maps/tiles or similar?...


typically, I do most texture art at 256x256 or 512x512.

but, anyways, presumably JPEG or similar could probably make it work.
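
back-of-the-envelope (Python; assuming 24-bit RGB and a guessed 10:1 JPEG 
ratio, both of which are just assumptions):

for side in (256, 512, 1024, 4096):
    raw_mb = side * side * 3 / 2**20        # uncompressed 24-bit RGB
    print(f"{side}x{side}: {raw_mb:6.2f} MB raw, ~{raw_mb / 10:5.2f} MB as JPEG")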


ironically, all this leads to more MMOs using client-side physics, 
and more FPS games using server-side physics, with an MMO generally 
having a much bigger problem regarding cheating than an FPS.


For the military stuff, it all comes down to compute load and network 
bandwidth/latency considerations - you simply can't move enough data 
around, quickly enough to support high-res. out-the-window imagery for 
a pilot pulling a 2g turn.  Hence you have to do all that locally.  
Cheating is less of an issue, since these are generally highly managed 
scenarios conducted as training exercises.  What's more of an issue is 
if the software in one sim. draws different conclusions than the 
software in another sim. (e.g., two planes in a dogfight, each 
concluding that it shot down the other one) - that's usually the 
result of a design bug rather than cheating (though Capt. Kirk's "I 
don't believe in the no-win scenario" line comes to mind).




this is why most modern games use client/server.

some older games (such as Doom-based games) determined things like AI 
behaviors and damage on each 

Re: [fonc] Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future

2012-04-09 Thread David Barbour
Going back to this post (to avoid distraction), I note that

Aggregate Level Simulation Protocol
   and its successor
High Level Architecture

Both provide time management to achieve consistency, i.e. so that the
times for all simulations appear the same to users and so that event
causality is maintained – events should occur in the same sequence in all
simulations.

You should not conclude, for simulations, that it is easier to spawn a
process than to serialize things. You'll end up spawning a process AND
serializing things.
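
A rough Python sketch of the conservative flavor of that serialization 
(illustrative only - ALSP/HLA time management is considerably more elaborate): 
each node buffers timestamped events and only processes those at or below the 
earliest time any peer might still send, so every node executes events in the 
same order.

import heapq

class Node:
    def __init__(self, peers):
        self.pending = []                         # heap of (timestamp, event)
        self.peer_time = {p: 0.0 for p in peers}  # latest promised time per peer

    def receive(self, timestamp, event):
        heapq.heappush(self.pending, (timestamp, event))

    def peer_advanced(self, peer, timestamp):
        self.peer_time[peer] = timestamp

    def safe_events(self):
        # only events that no peer can still preempt are safe to execute
        horizon = min(self.peer_time.values())
        while self.pending and self.pending[0][0] <= horizon:
            yield heapq.heappop(self.pending)

n = Node(peers=["sim-b", "sim-c"])
n.receive(1.0, "fire"); n.receive(3.0, "move")
n.peer_advanced("sim-b", 2.0); n.peer_advanced("sim-c", 2.5)
print(list(n.safe_events()))                      # only the t=1.0 event is safe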

Regards,

Dave


http://en.wikipedia.org/wiki/Aggregate_Level_Simulation_Protocol
http://en.wikipedia.org/wiki/High_Level_Architecture_(simulation)

The ALSP page goes into more detail on how this is achieved. HLA started as
the merging of Distributed Interactive Simulation (DIS) with ALSP.


On Tue, Apr 3, 2012 at 8:02 AM, Miles Fidelman
mfidel...@meetinghouse.net wrote:

 Steven Robertson wrote:

 On Tue, Apr 3, 2012 at 7:23 AM, Tom Novelli tnove...@gmail.com wrote:

 Even if there does turn out to be a simple and general way to do parallel
 programming, there'll always be tradeoffs weighing against it - energy
 usage
 and design complexity, to name two obvious ones.

 To design complexity: you have to be kidding.  For huge classes of
 problems - anything that's remotely transactional or event driven,
 simulation, gaming come to mind immediately - it's far easier to
 conceptualize as spawning a process than trying to serialize things.  The
 stumbling block has always been context switching overhead.  That problem
 goes away as your hardware becomes massively parallel.

 Miles Fidelman

 --
 In theory, there is no difference between theory and practice.
 In practice, there is.  -- Yogi Berra






-- 
bringing s-words to a pen fight
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future

2012-04-09 Thread Miles Fidelman

David Barbour wrote:

Going back to this post (to avoid distraction), I note that

Aggregate Level Simulation Protocol
and its successor
High Level Architecture

Both provide time management to achieve consistency, i.e. so that 
the times for all simulations appear the same to users and so that 
event causality is maintained – events should occur in the same 
sequence in all simulations.


and Alan Kay wrote:


Yes time management is a good idea.

Looking at the documentation here I see no mention of the (likely) 
inventor of the idea -- John McCarthy ca 1962-3, or the most 
adventurous early design to actually use the idea (outside of AI 
robots/agents work) -- David Reed in his 1978 MIT thesis "A Network 
Operating System."


Viewpoints implemented a strong real-time enough version of Reed's 
ideas about 10 years ago -- Croquet


The ALSP blurb on Wikipedia does mention the PARC Pup Protocol and 
Netword (the Internet before the Internet).


Actually, last time I looked (admittedly about 4 years ago, when I last 
worked for a firm that sold low-level libraries for simulation 
protocols, both DIS and HLA), larger network sims (e.g., the Air Force's 
Distributed Mission Training net) used DIS (Distributed Interactive 
Simulation) rather than HLA.


The DIS protocol is essentially distributed information stuffed into 
multicast UDP packets. HLA is an object-oriented middleware - the basic 
model is that objects are replicated across sims, with background 
protocols keeping those copies in sync - essentially publish-subscribe 
applied to replicated objects. As I remember, HLA just proves too 
complicated and injects too much overhead as the number of players on 
the net grows.
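
As a very rough illustration of that style in Python (this is NOT the real 
DIS PDU layout - the fields, packing, and multicast group are all made up):

import socket
import struct

GROUP, PORT = "239.1.2.3", 3000     # made-up multicast group and port

def send_state(entity_id, pos, vel):
    # pack an entity id plus position and velocity vectors into one datagram
    payload = struct.pack("!I6d", entity_id, *pos, *vel)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(payload, (GROUP, PORT))
    sock.close()

send_state(42, (100.0, 200.0, 30.0), (10.0, 0.0, 0.0))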


As I recall, DIS PDUs contain time stamps referenced to exercise time, 
which is generally synchronized with GPS time. Part of the problem with 
using HLA for larger sims has to do with time synchronization, 
particularly if using the time management service for synchronization 
(one now has to wait for the time management events to traverse the 
network) - and there are also various options the different simulators 
can/do use for synchronizing with exercise time. It all becomes a 
management nightmare. Lots of papers on the topic, though.
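
Back-of-the-envelope on why waiting for time-management traffic hurts 
(assuming, simplistically, at least one round trip to the slowest peer per 
time advance; the RTT figures below are made up):

for rtt_ms in (1, 10, 50, 100):
    print(f"RTT {rtt_ms:3d} ms -> at most ~{1000 // rtt_ms} time advances per second")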


Miles










--
In theory, there is no difference between theory and practice.
In practice, there is.  -- Yogi Berra


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future

2012-04-09 Thread David Barbour
Distributed time-management can be problematic for scaling.

There are solutions for it. They involve structuring communication so time
management can be performed locally and incrementally rather than as a
global pass. Lightweight time warp protocol does this with a little
hierarchy. My reactive demand programming model manages time point-to-point
(part of managing signal updates) in order to remain scalable, but hasn't
been validated in practice yet.

Time warp protocol (~1985) was also designed for parallel and distributed
simulations. It uses optimistic parallelism with rollback when a message
occurs out of order. But time warp was not really designed for interactive
simulations, where intermediate results might be observed and action might
be taken by an agent outside the protocol; it was aimed more at scientific
computing.
Lightweight time warp (~2009) is far more suitable for interactive systems,
though still not designed for real-time interaction.
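
A crude Python sketch of the optimistic idea (nothing like a full Time Warp 
implementation - anti-messages, GVT, and fossil collection are all omitted):

class OptimisticSim:
    def __init__(self):
        self.state = {}
        self.clock = 0.0
        self.processed = []              # [(timestamp, event, state_after)]

    def handle(self, timestamp, event):
        if timestamp < self.clock:       # straggler: roll back and replay
            keep = [p for p in self.processed if p[0] < timestamp]
            redo = [p[:2] for p in self.processed if p[0] >= timestamp]
            self.state = keep[-1][2] if keep else {}
            self.processed, self.clock = keep, timestamp
            for ts, ev in sorted(redo + [(timestamp, event)]):
                self._apply(ts, ev)
        else:
            self._apply(timestamp, event)

    def _apply(self, timestamp, event):
        self.state = dict(self.state, **{event: timestamp})   # toy state change
        self.processed.append((timestamp, event, self.state))
        self.clock = timestamp

sim = OptimisticSim()
sim.handle(1.0, "a"); sim.handle(3.0, "c"); sim.handle(2.0, "b")   # 2.0 arrives late
print(sorted(sim.state))                 # ['a', 'b', 'c'] despite the straggler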

HLA sounds pretty big, and I'm not familiar with any of it. I don't know
whether the right solutions for time management scalability issues would
conflict with some other mandate in HLA.

From what you say, the architects of HLA did see the problems with naive
time management... but, rather than solving the problems architecturally,
they simply provided a bunch of configuration options of dubious benefit
and called it 'your' problem. Unfortunately, that sounds like situation
normal to me.

Regards,

Dave




-- 
bringing s-words to a pen fight
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Kernel Maru

2012-04-09 Thread Ian Piumarta
Dear Julian,

On Apr 9, 2012, at 19:40 , Julian Leviston wrote:

 Also, simply, what are the semantic inadequacies of LISP that the Maru 
 paper refers to (http://piumarta.com/freeco11/freeco11-piumarta-oecm.pdf)? I 
 read the footnoted article (The Influence of the Designer on the Design—J. 
 McCarthy and Lisp), but it didn't elucidate things very much for me.

Here is a list that remains commented in my TeX file but which was never 
expanded with justifications and inserted into the final version.  (The ACM 
insisted that a paper published online, for download only, be strictly limited 
to five pages -- go figure!)

%%   Difficulties and omissions arise
%%   involving function-valued arguments, application of function-valued
%%   non-atomic expressions, inconsistent evaluation rules for arguments,
%%   shadowing of local by global bindings, the disjoint value spaces for
%%   functions and symbolic expressions, etc.

IIRC these all remain in the evaluator published in the first part of the 
LISP-1.5 Manual.

 I have to say that all of these papers and works are making me feel like a 3 
 year old making his first steps into understanding about the world. I guess I 
 must be learning, because this is the feeling I've always had when I've been 
 growing, yet I don't feel like I have any semblance of a grasp on any part of 
 it, really... which bothers me a lot.

My suggestion would be to forget everything that has been confusing you and 
begin again with the LISP-1.5 Manual (and maybe "Recursive Functions of 
Symbolic Expressions and Their Computation by Machine").  Then pick your 
favourite superfast-prototyping programming language and build McCarthy's 
evaluator in it.  (This step is not optional if you want to understand 
properly.)  Then throw some expressions containing higher-order functions and 
free variables at it, figure out why it behaves oddly, and fix it without 
adding any conceptual complexity.
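
As one possible starting point, here is a deliberately naive sketch in Python 
(not LISP 1.5's actual evaluator; the syntax and helpers are invented) in 
which lambda does not capture its defining environment, so a free variable in 
a functional argument picks up whatever binding is current at call time - the 
classic oddity you should run into:

def ev(x, env):
    if isinstance(x, str):                       # variable reference
        return env[x]
    if not isinstance(x, list):                  # literal
        return x
    op = x[0]
    if op == "lambda":                           # ["lambda", [params], body]
        return ("fn", x[1], x[2])                # note: no environment captured!
    if op == "let":                              # ["let", name, value, body]
        return ev(x[3], {**env, x[1]: ev(x[2], env)})
    f = ev(op, env)
    args = [ev(a, env) for a in x[1:]]
    if callable(f):                              # builtin
        return f(*args)
    _, params, body = f
    # the body runs in the CALLER's environment plus the parameters: dynamic scope
    return ev(body, {**env, **dict(zip(params, args))})

# (let n 1 (let f (lambda (x) (+ x n)) (let n 100 (f 1))))
prog = ["let", "n", 1,
        ["let", "f", ["lambda", ["x"], ["+", "x", "n"]],
         ["let", "n", 100,
          ["f", 1]]]]
print(ev(prog, {"+": lambda a, b: a + b}))       # prints 101, not the 2 you expected

Making lambda return a closure over its defining env (and evaluating the body 
there) is essentially the whole fix, and it adds no new machinery.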

A weekend or two should be enough for all of this.  At the end of it you will 
understand profoundly why most of the things that bothered you were bothering 
you, and you will never be bothered by them again.  Anything that remains 
bothersome might be caused by trying to think of Common Lisp as a 
dynamically-evaluated language, rather than a compiled one.

(FWIW: Subsequently fixing every significant asymmetry, semantic irregularity 
and immutable special case that you can find in your evaluator should lead you 
to something not entirely unlike the tiny evaluator at the heart of the 
original Maru.)

Regards,
Ian

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc