Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-19 Thread Josh Gargus

On Apr 12, 2012, at 5:12 PM, BGB wrote:

 On 4/11/2012 11:14 PM, Josh Gargus wrote:
 On Apr 8, 2012, at 7:31 PM, BGB wrote:
 
 now, why, exactly, would anyone consider doing rendering on the server?...
 
 One reason might be to amortize the cost of  global illumination 
 calculations.  Since much of the computation is view-independent, a Really 
 Big Server could compute this once per frame and use the results to render a 
 frame from the viewpoint of each connected client.  Then, encode it with 
 H.264 and send it downstream.  The total number of watts used could be much 
 smaller, and the software architecture could be much simpler.
 
 I suspect that this is what OnLive is aiming for... supporting existing 
 PC/console games is an interim step as they try to boot-strap a platform 
 with enough users to encourage game developers to make this leap.
 
 but, the bandwidth and latency requirements would be terrible...

What do you mean by "terrible"?  1 MB/s is quite good quality video.  Depending on 
the type of game, up to 100 ms of latency is OK.


 
 nevermind that currently, AFAIK, no HW exists which can do full-scene 
 global-illumination in real-time (at least using radiosity or similar),

You somewhat contradict yourself below, when you argue that clients can already 
do small-scale real-time global illumination (it's not fair to argue that it's 
computationally intractable on the server while also saying it can already be 
done on the client).

Also, Nvidia could churn out such hardware in one product cycle, if it saw a 
market for it.  Contrast this to the uncertainty of how long we'll have to wait 
for the hypothetical battery breakthrough that you mention below.


 much less handle this *and* do all of the 3D rendering for a potentially 
 arbitrarily large number of connected clients.

Just to be clear, I've been making an implicit assumption about these 
hypothetical ultra-realistic game worlds: that the number of FLOPs spent on 
physics/GI would be 1-2 orders of magnitude greater than the FLOPs to render 
the scene from a particular viewpoint.  If this is true, then it's not so 
expensive to render each additional client.  If it's false, then everything I'm 
saying is nonsense.
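To make that assumption concrete, here is a rough cost model in Python; the 
constants are invented placeholders, not measurements, and only the shape of the 
curve matters:

    # Rough, illustrative cost model for shared server-side simulation/GI.
    # All constants are hypothetical placeholders, not measurements.
    SHARED_GI_FLOPS = 1e15   # view-independent GI/physics, computed once per frame
    PER_VIEW_FLOPS  = 1e13   # per-client view rendering + H.264 encode

    def server_flops_per_frame(num_clients):
        return SHARED_GI_FLOPS + num_clients * PER_VIEW_FLOPS

    # If the shared chunk really is 1-2 orders of magnitude larger than a
    # single view, the cost *per client* falls quickly as clients are added:
    for n in (1, 10, 100, 1000):
        print(n, "%.2e flops/client" % (server_flops_per_frame(n) / n))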


 another problem is that there isn't much in the rendering process which can 
 be aggregated between clients which isn't already done (between frames, or 
 ahead-of-time) in current games.

I'm explicitly not talking about current games.


 
 in effect, the rendering costs at the datacenter are likely to scale linearly 
 with the number of connected clients, rather than at some shallower curve.

Asymptotically, yes, it would be linear, except for the big chunk of 
global-illumination / physics simulation that could be amortized.  And the 
higher you push the fidelity of the rendering, the bigger the chunk that can be 
amortized.


 
 much better I think is just following the current route:
 getting client PCs to have much better HW, so that they can do their own 
 localized lighting calculations (direct illumination can already be done in 
 real-time, and global illumination can be done small-scale in real-time).

I understand, that's what you think :-)


 
 the cost at the datacenters is also likely to be much lower, since they need 
 much less powerful servers, and have to spend much less money on electricity 
 and bandwidth.

Money spent on electricity and bandwidth is irrelevant, as long as there is a 
business model that generates revenue that grows (at least) linearly with 
resource usage.  I'm speculating that such a business model might be possible.


 
 likewise, the total watts used tends to be fairly insignificant for an end 
 user (except when operating on batteries), since PC power-use requirements 
 are small vs, say, air-conditioners or refrigerators, whereas people running 
 data-centers have to deal with the full brunt of the power-bill.

See above.


 
 the power-use issue (for mobile devices) could, just as easily, be solved by 
 some sort of much higher-capacity battery technology (say, a laptop or 
 cell-phone battery which, somehow, had a capacity well into the kWh range...).

It would have to be a huge breakthrough.  Desktop GPUs are still (at least) an 
order of magnitude too slow for this type of simulation, and they draw 200W.  
This is roughly 2 orders of magnitude greater than an iPad.  And then there's 
the question of heat dissipation.
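As a rough sanity check (ballpark figures only, not measurements): ~200 W for a 
desktop GPU against the handful of watts an iPad budgets for its whole SoC is 
indeed about two orders of magnitude, and if the GPU is itself an order of 
magnitude short on throughput, a handheld doing this locally would need 
something like a thousand-fold improvement in performance per watt, battery 
breakthrough or not.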

It's still a good point.  I never meant to imply that a server-rendering, 
video-streaming architecture is the be-all-end-all optimal design, but your 
point brings this into clearer focus.


 
 at this point, people won't really care much if, say, plugging in their 
 cell-phone to recharge is drawing several amps, given that power is 
 relatively cheap in the greater scheme of things (and, assuming migration 
 away from fossil fuels, could likely still get considerably cheaper over 
 time).
 
 meanwhile, no obvious current/near-term technology is likely to make internet 
 bandwidth considerably 

Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-09 Thread BGB

On 4/8/2012 8:26 PM, Miles Fidelman wrote:

BGB wrote:

On 4/4/2012 5:26 PM, Miles Fidelman wrote:

BGB wrote:
Not so sure.  Probably similar levels of complexity between a 
military sim. and, say, World of Warcraft.  Fidelity to real-world 
behavior is more important, and network latency matters for the 
extreme real-time stuff (e.g., networked dogfights at Mach 2), but 
other than that, IP networks, gaming class PCs at the endpoints, 
serious graphics processors.  Also more of a need for 
interoperability - as there are lots of different simulations, 
plugged together into lots of different exercises and training 
scenarios - vs. a MMORPG controlled by a single company.




ok, so basically a heterogeneous MMO... and distributed.




well, yes, but I am not entirely sure how many non-distributed 
(single-server) MMOs there are in the first place.


presumably, the world has to be split between multiple servers to 
deal with all of the users.


some older MMOs had "shards", where users on one server wouldn't be 
able to see what users on a different server were doing, but this is 
AFAIK generally not really considered acceptable in current MMOs 
(hence why the world would be divided up into areas or regions 
instead, presumably with some sort of load-balancing and similar).


unless of course, this is operating under a different assumption of 
what a distributed-system is than one which allows a load-balanced 
client/server architecture.


Running on a cluster is very different from having all the 
intelligence on the individual clients.  As far as I can tell, MMOs by 
and large run most of the simulation on centralized clusters (or at 
least within the vendor's cloud).  Military sims do EVERYTHING on the 
clients - there are no central machines, just the information 
distribution protocol layer.


yes, but there are probably drawbacks with this performance-wise and 
reliability-wise.


not that all of the servers need to be run in a single location or be 
owned by a single company, but there are some general advantages to the 
client/server model.





reading some stuff (an overview for the DIS protocol, ...), it 
seems that the level of abstraction is in some ways a bit higher 
(than game protocols I am familiar with), for example, it will 
indicate the "entity type" in the protocol, rather than, say, the 
name of its 3D model.
Yes.  The basic idea is that a local simulator - say a tank, or an 
airframe - maintains a local environment model (local image 
generation and position models maintained by dead reckoning) - what 
goes across the network are changes to its velocity vector, and 
weapon fire events.  The intent is to minimize the amount of data 
that has to be sent across the net, and to maintain speed of image 
generation by doing rendering locally.
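For illustration, a minimal dead-reckoning sketch in Python (my own 
simplification, not actual DIS code; entity state and thresholds are reduced to 
the bare minimum):

    # Minimal dead-reckoning sketch (illustrative only, not real DIS code).
    # Every simulator extrapolates remote entities from their last reported
    # state; the owner sends a new update only when truth drifts too far
    # from that shared prediction (plus discrete events like weapon fire).
    import math

    class EntityState:
        def __init__(self, pos, vel):
            self.pos = list(pos)   # (x, y, z), meters
            self.vel = list(vel)   # meters/second

    def extrapolate(state, dt):
        # what every receiver assumes: constant-velocity motion
        return [p + v * dt for p, v in zip(state.pos, state.vel)]

    def needs_update(true_pos, last_sent, dt, threshold=1.0):
        # the sender runs the same extrapolation as the receivers;
        # only a drift beyond `threshold` meters triggers network traffic
        return math.dist(true_pos, extrapolate(last_sent, dt)) > threshold

The bandwidth win is that an entity moving predictably generates almost no 
traffic; only maneuvers and fire events hit the network.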




now, why, exactly, would anyone consider doing rendering on the 
server?...


Well, "render" might be the wrong term here.  Think more about map 
tiling.  When you do map applications, the GIS server sends out map 
tiles.  Similarly, at least some MMOs do most of the scene generation 
centrally.  For that matter, think about moving around Google Earth in 
image mode - the data is still coming from Google servers.


The military simulators come from a legacy of flight simulators - VERY 
high resolution imagery, very fast movement.  Before the simulation 
starts, terrain data and imagery are distributed in advance - every 
simulator has all the data needed to generate an out-the-window view, 
and to do terrain calculations (e.g., line-of-sight) locally.




ok, so sending polygons and images over the net.

so, by "very", is the implication that they are sending large numbers of 
1024x1024 or 4096x4096 texture-maps/tiles or similar?...


typically, I do most texture art at 256x256 or 512x512.

but, anyways, presumably JPEG or similar could probably make it work.
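For a rough sense of scale (ballpark arithmetic only): a 4096x4096 RGB texture 
is 4096 x 4096 x 3 bytes, roughly 48 MB uncompressed, and at a typical JPEG 
ratio somewhere around 10:1 to 20:1 that drops to a few MB per tile, which is 
plausible to pre-distribute or stream occasionally but not something you would 
want to resend every frame.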


ironically, all this leads to more MMOs using client-side physics, 
and more FPS games using server-side physics, with an MMO generally 
having a much bigger problem regarding cheating than an FPS.


For the military stuff, it all comes down to compute load and network 
bandwidth/latency considerations - you simply can't move enough data 
around quickly enough to support high-res out-the-window imagery for 
a pilot pulling a 2g turn.  Hence you have to do all that locally.  
Cheating is less of an issue, since these are generally highly managed 
scenarios conducted as training exercises.  What's more of an issue is 
if the software in one sim draws different conclusions than the 
software in another sim (e.g., two planes in a dogfight, each 
concluding that it shot down the other one) - that's usually the 
result of a design bug rather than cheating (though Capt. Kirk's "I 
don't believe in the no-win scenario" line comes to mind).




this is why most modern games use client/server.

some older games (such as Doom-based games) determined things like AI 
behaviors and damage on each 

Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-08 Thread Miles Fidelman

BGB wrote:

On 4/4/2012 5:26 PM, Miles Fidelman wrote:

BGB wrote:
Not so sure.  Probably similar levels of complexity between a 
military sim. and, say, World of Warcraft.  Fidelity to real-world 
behavior is more important, and network latency matters for the 
extreme real-time stuff (e.g., networked dogfights at Mach 2), but 
other than that, IP networks, gaming class PCs at the endpoints, 
serious graphics processors.  Also more of a need for 
interoperability - as there are lots of different simulations, 
plugged together into lots of different exercises and training 
scenarios - vs. a MMORPG controlled by a single company.




ok, so basically a heterogeneous MMO... and distributed.




well, yes, but I am not entirely sure how many non-distributed (single-server) 
MMOs there are in the first place.


presumably, the world has to be split between multiple servers to deal 
with all of the users.


some older MMOs had "shards", where users on one server wouldn't be 
able to see what users on a different server were doing, but this is 
AFAIK generally not really considered acceptable in current MMOs 
(hence why the world would be divided up into areas or regions 
instead, presumably with some sort of load-balancing and similar).


unless of course, this is operating under a different assumption of 
what a distributed-system is than one which allows a load-balanced 
client/server architecture.


Running on a cluster is very different from having all the 
intelligence on the individual clients.  As far as I can tell, MMOs by 
and large run most of the simulation on centralized clusters (or at 
least within the vendor's cloud).  Military sims do EVERYTHING on the 
clients - there are no central machines, just the information 
distribution protocol layer.





reading some stuff (an overview for the DIS protocol, ...), it seems 
that the level of abstraction is in some ways a bit higher (than 
game protocols I am familiar with), for example, it will indicate 
the "entity type" in the protocol, rather than, say, the name of 
its 3D model.
Yes.  The basic idea is that a local simulator - say a tank, or an 
airframe - maintains a local environment model (local image 
generation and position models maintained by dead reckoning) - what 
goes across the network are changes to its velocity vector, and 
weapon fire events.  The intent is to minimize the amount of data 
that has to be sent across the net, and to maintain speed of image 
generation by doing rendering locally.




now, why, exactly, would anyone consider doing rendering on the 
server?...


Well, "render" might be the wrong term here.  Think more about map 
tiling.  When you do map applications, the GIS server sends out map 
tiles.  Similarly, at least some MMOs do most of the scene generation 
centrally.  For that matter, think about moving around Google Earth in 
image mode - the data is still coming from Google servers.


The military simulators come from a legacy of flight simulators - VERY 
high resolution imagery, very fast movement.  Before the simulation 
starts, terrain data and imagery are distributed in advance - every 
simulator has all the data needed to generate an out-the-window view, 
and to do terrain calculations (e.g., line-of-sight) locally.


ironically, all this leads to more MMOs using client-side physics, and 
more FPS games using server-side physics, with an MMO generally having 
a much bigger problem regarding cheating than an FPS.


For the military stuff, it all comes down to compute load and network 
bandwidth/latency considerations - you simply can't move enough data 
around quickly enough to support high-res out-the-window imagery for a 
pilot pulling a 2g turn.  Hence you have to do all that locally.  
Cheating is less of an issue, since these are generally highly managed 
scenarios conducted as training exercises.  What's more of an issue is 
if the software in one sim draws different conclusions than the 
software in another sim (e.g., two planes in a dogfight, each 
concluding that it shot down the other one) - that's usually the result 
of a design bug rather than cheating (though Capt. Kirk's "I don't 
believe in the no-win scenario" line comes to mind).




There's been a LOT of work over the years in the field of 
distributed simulation.  It's ALL about scaling, and most of the 
issues have to do with time-critical, CPU-intensive calculations.




possibly, but I meant in terms of the scalability of using 
load-balanced servers (divided by area) and server-to-server message 
passing.


Nope.  Network latencies and bandwidth are the issue.  Just a little bit 
of jitter in the timing and pilots tend to hurl all over the 
simulators.  We're talking about repainting a high-res display 
20 to 40 times per second - you've got to drive that locally.
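To put rough numbers on that (ballpark only): 20-40 Hz is a 25-50 ms budget per 
frame, while a wide-area round trip alone is commonly 30-100 ms, so a remote 
renderer would spend the whole frame budget on the network before doing any 
work; keeping rendering local is what keeps the motion-to-display delay low 
enough that pilots don't get sick.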




--
In theory, there is no difference between theory and practice.
In practice, there is.  -- Yogi Berra



Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-04 Thread BGB

On 4/3/2012 9:29 PM, Miles Fidelman wrote:

BGB wrote:


On 4/3/2012 10:47 AM, Miles Fidelman wrote:


Hah.  You've obviously never been involved in building a CGF 
simulator (Computer Generated Forces) - absolute spaghetti code when 
you have to have 4 main loops and touch 2000 objects (say, 2000 tanks) 
every simulation frame.  Comparatively trivial if each tank is 
modeled as a process or actor and you run asynchronously.


I have not encountered this term before, but does it have anything to 
do with an RBDE (Rigid Body Dynamics Engine), often called simply 
a "physics engine"?  This would be something like Havok or ODE or 
Bullet or similar.


There is some overlap, but only some - for example, when modeling 
objects in flight (e.g., a plane flying at constant velocity, or an 
artillery shell in flight) - but for the most part, the objects being 
modeled are active, and making decisions (e.g., a plane or tank, with 
a simulated pilot, and often with the option of putting a 
person-in-the-loop).


So it's not really possible to model these things from the outside 
(forces acting on objects); they have to be modeled more from the inside 
(run decision-making code for each object).




fair enough...

but, yes, very often in cases where one is using a physics engine, this 
may be combined with the use of internal logic and forces as well, 
albeit admittedly there is a split:
technically, these forces are applied directly by whatever code is using 
the physics engine, rather than by the physics engine itself.


for example: just because it is a "physics engine" doesn't mean that it 
necessarily has to be realistic, or that objects can't supply their 
own forces.


I guess, however, that this would be closer to the main server end in 
my case, namely the part that manages the entity system and NPC AIs and 
similar (and, also, the game logic is more FPS style).


still not heard the term CGF before though.


in this case, the basic timestep update is basically to loop over all 
the entities in the scene and call their "think" methods and similar 
(things like AI and animation are generally handled via "think" 
methods), and maybe do things like updating physics 
(if relevant), ...


this process is single-threaded, with a single loop, though.

I guess it is arguably event-driven though:
handling timing is done via events ("think" being a special case);
most interactions between entities involve events as well;
...

many entities and AIs are themselves essentially finite-state-machines.
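a rough sketch of this kind of update loop (simplified and purely illustrative, 
not my actual engine code):

    # rough sketch of a single-threaded entity update loop (illustrative only)
    class Entity:
        def __init__(self):
            self.state = "idle"      # AI as a small finite-state machine
            self.next_think = 0.0    # next time think() should run
            self.events = []         # events queued by other entities

        def think(self, now):
            # dispatch on the FSM state and schedule the next think
            if self.state == "idle":
                self.next_think = now + 0.5
            elif self.state == "attack":
                self.next_think = now + 0.1

    def update_world(entities, now, dt):
        for ent in entities:
            for ev in ent.events:    # deliver queued events first
                ent.state = ev       # here an event just switches FSM state
            ent.events.clear()
            if now >= ent.next_think:
                ent.think(now)       # the timed "think" is a special-case event
        # ...then update physics for whatever entities need it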


or such...




Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-04 Thread BGB

On 4/4/2012 9:29 AM, Miles Fidelman wrote:

BGB wrote:

On 4/4/2012 6:35 AM, Miles Fidelman wrote:

BGB wrote:


still not heard the term CGF before though.
If you do military simulations, CGF (Computer Generated Forces) and 
SAF (Semi-Automated Forces) are the equivalent terms of art to "game 
engine".  Sort of.




"military simulations" as in RTS (Real-Time Strategy) or similar, or 
something different?... (or maybe even realistic simulations 
used by the actual military, rather than for purposes of gaming?...).


Well, there are really two types of simulations in use in the military 
(at least that I'm familiar with):


- very detailed engineering models of various sorts (ranging from 
device simulations to simulations of, say, a sea-skimming missile vs. a 
Gatling-gun point-defense weapon).  (think MATLAB- and SIMULINK-type 
models)




don't know all that much about MATLAB or SIMULINK, but do know 
about things like FEM (Finite Element Method) and CFD (Computational 
Fluid Dynamics) and similar.


(left out a bunch of stuff here, mostly about FEM, CFD, and particle systems 
in games technology, and wondering how some of this stuff compares 
with its analogues as used in an engineering context).



- game-like simulations (which I'm more familiar with): but these are 
serious games, with lots of people and vehicles running around 
practicing techniques, or experimenting with new weapons and tactics, 
and so forth; or pilots training in team techniques by flying missions 
in a networked simulator (and saving jet fuel); or decision makers 
practicing in simulated command posts -- simulators take the form of 
both person-in-the-loop (e.g., flight sim. with a real pilot) and 
CGF/SAF (an enemy brigade is simulated, with information inserted into 
the simulation network so enemy forces show up on radar screens, 
heads-up displays, and so forth)


For more on the latter, start at:

http://en.wikipedia.org/wiki/Distributed_Interactive_Simulation
http://www.sisostds.org/



so, sort of like: this stuff is to gaming what IBM mainframes are to PCs?...

I had mostly heard about military people doing all of this stuff using 
decommissioned vehicles and paintball and similar, but either way.


I guess game-like simulations are probably cheaper.





Wikipedia hasn't been very helpful here regarding a lot of this 
(it doesn't seem to know about most of these terms).


well, it does know about game engines and RTS though.


Maybe check out 
http://www.mak.com/products/simulate/computer-generated-forces.html 
for an example of a CGF.




looked briefly, yes, ok.




Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-04 Thread Miles Fidelman

BGB wrote:

On 4/4/2012 9:29 AM, Miles Fidelman wrote:

- game-like simulations (which I'm more familiar with): but these are 
serious games, with lots of people and vehicles running around 
practicing techniques, or experimenting with new weapons and tactics, 
and so forth; or pilots training in team techniques by flying 
missions in a networked simulator (and saving jet fuel); or decision 
makers practicing in simulated command posts -- simulators take the 
form of both person-in-the-loop (e.g., flight sim. with a real pilot) 
and CGF/SAF (an enemy brigade is simulated, with information inserted 
into the simulation network so enemy forces show up on radar screens, 
heads-up displays, and so forth)


For more on the latter, start at:

http://en.wikipedia.org/wiki/Distributed_Interactive_Simulation
http://www.sisostds.org/



so, sort of like: this stuff is to gaming what IBM mainframes are to 
PCs?...


Not so sure.  Probably similar levels of complexity between a military 
sim. and, say, World of Warcraft.  Fidelity to real-world behavior is 
more important, and network latency matters for the extreme real-time 
stuff (e.g., networked dogfights at Mach 2), but other than that, IP 
networks, gaming class PCs at the endpoints, serious graphics 
processors.  Also more of a need for interoperability - as there are 
lots of different simulations, plugged together into lots of different 
exercises and training scenarios - vs. a MMORPG controlled by a single 
company.


I had mostly heard about military people doing all of this stuff using 
decommissioned vehicles and paintball and similar, but either way.


I guess game-like simulations are probably cheaper.

In terms of jet fuel, travel costs, and other logistics, absolutely.  
But... when you figure in the huge dollars spent paying large systems 
integrators to write software, I'm not sure how much cheaper it all 
becomes.  (The big systems integrators are not known for the brilliance of 
their coders or the efficiency of their processes -- not a lot of 20-hour 
days by 20-somethings betting on their stock options.  A lot of good 
people, but older, slower, more likely to put family first; plus a lot 
of organizational overhead built into the prices.)




--
In theory, there is no difference between theory and practice.
In practice, there is.  -- Yogi Berra




[fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-03 Thread BGB
(changed subject, as this was much more about physics simulation than 
about concurrency).


yes, this is a big long personal history dump type thing, please 
ignore if you don't care.



On 4/3/2012 10:47 AM, Miles Fidelman wrote:

David Barbour wrote:


Control flow is a source of much implicit state and accidental 
complexity.


A step processing approach at 20Hz isn't all bad, though, since at 
least you can understand the behavior of each frame in terms of the 
current graph of objects. The only problem with it is that this 
technique doesn't scale. There are easily up to 15 orders of 
magnitude in update frequency between slow-updating and fast-updating 
data structures. Object graphs are similarly heterogeneous in many 
other dimensions - trust and security policy, for example.


Hah.  You've obviously never been involved in building a CGF simulator 
(Computer Generated Forces) - absolute spaghetti code when you have 
to have 4 main loops and touch 2000 objects (say, 2000 tanks) every 
simulation frame.  Comparatively trivial if each tank is modeled as a 
process or actor and you run asynchronously.
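For illustration, a minimal sketch of the "one actor per tank" style being 
described here, using Python asyncio tasks as a lightweight stand-in for 
processes or actors (not from any real CGF framework; the per-tank behavior is 
a placeholder):

    # minimal actor-per-entity sketch (illustrative only, not real CGF code)
    import asyncio, random

    async def tank(tank_id, world):
        # each tank owns its own control flow instead of being touched by
        # a handful of global main loops every frame
        while world["running"]:
            world["headings"][tank_id] = random.uniform(0, 360)  # stand-in "decision"
            await asyncio.sleep(0.05)                            # ~20 Hz, per entity

    async def exercise(num_tanks=2000, seconds=1.0):
        world = {"running": True, "headings": {}}
        tasks = [asyncio.create_task(tank(i, world)) for i in range(num_tanks)]
        await asyncio.sleep(seconds)
        world["running"] = False
        await asyncio.gather(*tasks)

    # asyncio.run(exercise())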




I have not encountered this term before, but does it have anything to do 
with an RBDE (Rigid Body Dynamics Engine), often called simply a 
"physics engine"?

This would be something like Havok or ODE or Bullet or similar.

I have written such an engine before, but my effort was single-threaded 
(using a fixed-frequency virtual timer, with time-step subdivision to 
deal with fast-moving objects).


probably would turn a bit messy though if it had to be made internally 
multithreaded (it is bad enough just trying to deal with irregular 
timesteps, blarg...).


however, it was originally considered to potentially run in a separate 
thread from the main 3D engine, but I never really bothered as there 
turned out to not be much point.



granted, one could likely still parallelize it while keeping everything 
frame-locked though, like having the threads essentially just subdivide 
the scene-graph and each work on a certain part of the scene, doing the 
usual thing of all of them predicting/handling contacts within a single 
time step, and then all updating positions in sync, and preparing for 
the next frame.


in the above scenario, the main cost would likely be how best to go 
about efficiently dividing up work among the threads (the usual strategy 
I use is work queues, but I have doubts regarding their scalability).
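a rough sketch of that frame-locked phase structure (illustrative only; 
positions and velocities are scalars for brevity, and it deliberately skips the 
hard part, which is partitioning the scene and handling contacts across 
partition boundaries):

    # frame-locked parallel timestep sketch (illustrative only)
    import threading

    def step_slice(objs, barrier, dt):
        for o in objs:                    # phase 1: predict desired positions
            o["pred"] = o["pos"] + o["vel"] * dt
        barrier.wait()
        # phase 2: contact detection/response for this slice would go here;
        # nobody moves anything until every thread has finished this phase
        barrier.wait()
        for o in objs:                    # phase 3: commit positions in sync
            o["pos"] = o["pred"]

    def parallel_step(scene, dt=1.0 / 60, num_threads=4):
        slices = [scene[i::num_threads] for i in range(num_threads)]
        barrier = threading.Barrier(num_threads)
        threads = [threading.Thread(target=step_slice, args=(s, barrier, dt))
                   for s in slices]
        for t in threads:
            t.start()
        for t in threads:
            t.join()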


side note:
in my own experience, simply naively handling/updating all objects 
in sequence doesn't tend to work out very well when mixed with things 
like contact forces (example: check if an object can make its move; if so, 
update its position and move on to the next object, ...), although this does 
work reasonably well for Quake-style physics (where objects merely update 
positions linearly, and have no actual contact forces).


better seems to be:
for all moving objects, predict where the object wants to be in the next frame;
determine which objects will collide with each other;
calculate contact forces and apply these to objects;
update movement predictions;
apply movement updates.
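roughly, as code (a deliberately simplified sketch: 1D point masses, a naive 
O(n^2) pair test, and a spring-style penalty force with made-up constants, just 
to show the order of operations):

    # simplified sketch of the step ordering above (illustrative only)
    def physics_step(objects, dt, radius=0.5, stiffness=100.0):
        for o in objects:                 # 1. predict desired positions
            o["pred"] = o["pos"] + o["vel"] * dt

        pairs = [(a, b)                   # 2. naive collision detection
                 for i, a in enumerate(objects) for b in objects[i + 1:]
                 if abs(a["pred"] - b["pred"]) < 2 * radius]

        for a, b in pairs:                # 3. contact (penalty) forces
            overlap = 2 * radius - abs(a["pred"] - b["pred"])
            push = 1.0 if a["pred"] > b["pred"] else -1.0
            a["vel"] += push * stiffness * overlap * dt / a["mass"]
            b["vel"] -= push * stiffness * overlap * dt / b["mass"]

        for o in objects:                 # 4./5. update predictions and commit
            o["pos"] += o["vel"] * dt

the spring-style penalty force is also what produces the slightly "rubbery" 
behavior and the residual interpenetration described below.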

however, interpenetration is still not avoided (sufficient forces will 
still essentially push objects into each other). theoretically, one can 
disallow interpenetration (by doing like Quake-style physics and simply 
disallowing any post-contact updates which would result in subsequent 
interpenetration), but in my prior attempts to enable such a feature, 
the objects would often become stuck and seemingly entirely unable to 
move, and were in fact far more prone to violently explode (a pile of 
objects will seemingly become stuck together and immovable, maybe for 
several seconds, until ultimately all of them violently explode 
outward at high velocities).


allowing objects to interpenetrate was thus seen as the lesser evil: even 
though objects were violating the basic assumption that rigid bodies 
can't exist in the same place at the same time, the situation typically 
corrects itself reasonably quickly (assuming the collision-detection and 
force-calculation functions are working correctly, itself easier said 
than done), as the contact forces push the objects back apart until they 
reach a sort of equilibrium, and with far less incidence of random 
explosions.


sadly, the whole physics engine ended up a little "rubbery" as a result 
of all of this, but it seemed reasonable, as I have also observed 
similar behavior to some extent in Havok, and have figured out that I 
could deal with matters well enough by using a simpler (Quake-style) 
physics engine for most non-dynamic objects. IOW: for things using AABBs 
(Axis-Aligned Bounding Boxes) and similar, and other related solid 
objects which can't undergo rotation, a very naive check-and-update 
strategy works fairly well, since such objects only ever undergo 
translational movement.


admittedly, I also never was able to get constraints