On 4/8/2012 8:26 PM, Miles Fidelman wrote:
BGB wrote:
On 4/4/2012 5:26 PM, Miles Fidelman wrote:
BGB wrote:
Not so sure. Probably similar levels of complexity between a military sim. and, say, World of Warcraft. Fidelity to real-world behavior is more important, and network latency matters for the extreme real-time stuff (e.g., networked dogfights at Mach 2), but other than that, IP networks, gaming class PCs at the endpoints, serious graphics processors. Also more of a need for interoperability - as there are lots of different simulations, plugged together into lots of different exercises and training scenarios - vs. a MMORPG controlled by a single company.


ok, so basically a heterogeneous, distributed MMO.


well, yes, but I am not entirely sure how many non-distributed (single server) MMOs there are in the first place.

presumably, the world has to be split between multiple servers to deal with all of the users.

some older MMOs had "shards", where users on one server wouldn't be able to see what users on a different server were doing, but this is AFAIK generally not really considered acceptable in current MMOs (hence why the world would be divided up into "areas" or "regions" instead, presumably with some sort of load-balancing and similar).
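The region-based split described above can be sketched roughly as follows. This is a minimal illustration, not any particular MMO's architecture: world positions hash to grid cells, each cell owned by one (hypothetical) server, so load is split spatially rather than by shard.

```python
def region_for(pos, region_size=512.0):
    """Map a world position to a grid-cell region id; each region is
    owned by one server, so users in the same area share a server."""
    x, y = pos
    return (int(x // region_size), int(y // region_size))

servers = {}  # region id -> hypothetical server handle

def server_for(pos):
    # Lazily assign regions to servers; a real cluster would also
    # rebalance hot regions and hand entities off across boundaries.
    region = region_for(pos)
    return servers.setdefault(region, f"server-{region[0]}-{region[1]}")
```

Two players standing near each other land on the same server, so (unlike shards) they can see each other; the hard parts a real system adds are load balancing and handoff at region edges.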

unless of course, this is operating under a different assumption of what a distributed-system is than one which allows a load-balanced client/server architecture.

Running on a cluster is very different from having all the intelligence on the individual clients. As far as I can tell, MMOs by and large run most of the simulation on centralized clusters (or at least within the vendor's cloud). Military sims do EVERYTHING on the clients - there are no central machines, just the information distribution protocol layer.

yes, but there are probably drawbacks with this performance-wise and reliability-wise.

not that all of the servers need to be run in a single location or be owned by a single company, but there are some general advantages to the client/server model.



reading some stuff (an overview of the DIS protocol, ...), it seems that the "level of abstraction" is in some ways a bit higher (than game protocols I am familiar with); for example, it will indicate the "entity type" in the protocol, rather than, say, the name of its 3D model.
Yes. The basic idea is that a local simulator - say a tank, or an airframe - maintains a local environment model (local image generation and position models maintained by dead reckoning) - what goes across the network are changes to its velocity vector, and weapon fire events. The intent is to minimize the amount of data that has to be sent across the net, and to maintain speed of image generation by doing rendering locally.
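The dead-reckoning scheme described above can be sketched in a few lines. This is a simplified first-order version (the threshold value and field names are illustrative, not taken from the DIS spec): every simulator extrapolates remote entities from their last reported state, and the owning simulator only transmits a new update when its true position drifts too far from what everyone else is extrapolating.

```python
import math

def dead_reckon(last_pos, velocity, dt):
    """Extrapolate a remote entity's position from its last reported
    state vector (first-order, DIS-style dead reckoning)."""
    return tuple(p + v * dt for p, v in zip(last_pos, velocity))

def needs_update(actual_pos, reckoned_pos, threshold=1.0):
    """The owning simulator compares its true position against what
    everyone else is extrapolating, and only sends a new entity-state
    update when the error exceeds the agreed threshold (meters here)."""
    return math.dist(actual_pos, reckoned_pos) > threshold

# Last broadcast: at origin, moving 100 m/s along x; half a second later:
reckoned = dead_reckon((0.0, 0.0, 0.0), (100.0, 0.0, 0.0), dt=0.5)
print(reckoned)                                   # (50.0, 0.0, 0.0)
print(needs_update((50.4, 0.0, 0.0), reckoned))   # small drift: no update
```

This is what keeps traffic down to velocity-change and fire events: an entity flying straight and level generates almost no network traffic at all.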


now, why, exactly, would anyone consider doing rendering on the server?...

Well, render might be the wrong term here. Think more about map tiling. When you do map applications, the GIS server sends out map tiles. Similarly, at least some MMOs do most of the scene generation centrally. For that matter, think about moving around Google Earth in image mode - the data is still coming from Google servers.

The military simulators come from a legacy of flight simulators - VERY high resolution imagery, very fast movement. Before the simulation starts, terrain data and imagery are distributed in advance - every simulator has all the data needed to generate an out-the-window view, and to do terrain calculations (e.g., line-of-sight) locally.


ok, so sending polygons and images over the net.

so, by "very", is the implication that they are sending large numbers of 1024x1024 or 4096x4096 texture-maps/tiles or similar?...

typically, I do most texture art at 256x256 or 512x512.

but, anyways, presumably JPEG or similar could probably make it work.


ironically, all this leads to more MMOs using client-side physics, and more FPS games using server-side physics, with an MMO generally having a much bigger problem regarding cheating than an FPS.

For the military stuff, it all comes down to compute load and network bandwidth/latency considerations - you simply can't move enough data around, quickly enough, to support high-res. out-the-window imagery for a pilot pulling a 2g turn. Hence you have to do all that locally. Cheating is less of an issue, since these are generally highly managed scenarios conducted as training exercises. What's more of an issue is if the software in one sim. draws different conclusions than the software in another sim. (e.g., two planes in a dogfight, each concluding that it shot down the other one) - that's usually the result of a design bug rather than cheating (though Capt. Kirk's "I don't believe in the no win scenario" line comes to mind).


this is why most modern games use client/server.

some older games (such as Doom-based games) determined things like AI behaviors and damage on each player's computer, but this made reliability poor.

some newer games, such as Minecraft, have a lot of bugs related to doing things like this.


hence why client/server is popular (in most games Quake-era and newer, for example: Quake-family engines, Source Engine, Unreal Engine, ...).

these need not be centralized servers though, as (especially in FPS games) the servers typically run on user PCs, typically with one of the players "hosting" the game and running as a server.

there are typically also "dedicated" servers (which don't have a client running), but these are a little less common.



There's been a LOT of work over the years in the field of distributed simulation. It's ALL about scaling, and most of the issues have to do with time-critical, cpu-intensive calculations.


possibly, but I meant in terms of the scalability of using load-balanced servers (divided by area) and server-to-server message passing.

Nope. Network latencies and bandwidth are the issue. Just a little bit of jitter in the timing and pilots tend to hurl all over the simulators. We're talking about repainting a high-res. display between 20 to 40 times per second - you've got to drive that locally.
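A back-of-the-envelope calculation makes the point concrete (the resolution here is just an assumed "modest high-res" figure): streaming raw frames for even one out-the-window view swamps any realistic network link.

```python
# Streaming uncompressed frames for a single out-the-window display:
width, height = 1280, 1024    # an assumed, fairly modest resolution
bytes_per_pixel = 3           # 24-bit color
fps = 30                      # middle of the 20-40 Hz range above

bytes_per_sec = width * height * bytes_per_pixel * fps
print(f"{bytes_per_sec / 1e6:.0f} MB/s")  # ~118 MB/s per display, before overhead
```

Compression helps but adds latency of its own, and a cockpit sim has several such displays, which is why the imagery is pre-distributed and rendered locally.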


well, yes, but this concerns the implications of doing things like rendering and camera control remotely. it is fairly rarely done this way in actual games, because it doesn't work very well in general.


granted, FPS games (in deathmatch-style gameplay) are likely a better example than MMOs, as an FPS game is much more sensitive to latency (given people are jumping all over the place and firing weapons at each other and so on). hence why the camera would typically be handled primarily on the client end.

however, clients don't communicate directly with each other, but with the server. the server then sends to everyone else its understanding of where everyone is currently at (and mostly the server sits there sending out world "delta" messages, which update the positions and similar of various entities).
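The delta messages described above can be sketched as follows. This is an illustrative simplification (real engines also track per-client acknowledged snapshots and quantize the fields): each tick the server diffs the current entity states against the previous snapshot and sends only what changed.

```python
def delta(prev, curr):
    """Compute which entity states changed since the last snapshot;
    only these entries go on the wire each server tick."""
    return {eid: state for eid, state in curr.items()
            if prev.get(eid) != state}

prev = {1: (0, 0), 2: (5, 5)}
curr = {1: (1, 0), 2: (5, 5), 3: (9, 9)}  # entity 1 moved, 3 spawned
print(delta(prev, curr))                  # {1: (1, 0), 3: (9, 9)}
```

Entity 2 didn't move, so it costs nothing this tick; with most of the world idle at any moment, the steady-state traffic stays small.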

sometimes, this can lead to issues, like say, someone takes cover but still gets shot, because as far as the server knew, they had not yet taken cover.

typically the server retains responsibility over weapons fire and damage though.
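One common mitigation for the "took cover but still got shot" case is server-side lag compensation (the Source engine documents this approach): the server keeps a short position history per entity and, when judging a shot, rewinds the target to where the lagged shooter actually saw it. A minimal sketch, with illustrative timestamps:

```python
import bisect

class History:
    """Timestamped positions kept per entity so the server can rewind
    a target to where a lagged shooter saw it when they fired."""
    def __init__(self):
        self.times, self.positions = [], []

    def record(self, t, pos):
        self.times.append(t)
        self.positions.append(pos)

    def rewind(self, t):
        # Position at the most recent recorded tick at or before time t.
        i = bisect.bisect_right(self.times, t) - 1
        return self.positions[max(i, 0)]

h = History()
h.record(0.0, (0, 0))
h.record(0.1, (1, 0))
h.record(0.2, (2, 0))
# Shooter's view was ~150 ms old: judge the hit against the rewound pose.
print(h.rewind(0.05))   # (0, 0)
```

Note this trades one artifact for another: the shooter's hits feel consistent, at the cost of the target occasionally dying "around a corner," which is exactly the complaint described above.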


granted, the latency issue is still enough of an issue to where many people instead prefer "LAN parties", where there is a direct high-speed connection (typically Ethernet) between the client and the server systems.


it is an open question whether direct client-to-client communication over the internet could help, but ultimately this idea is largely rendered useless by the common usage of NAT (it would require all clients to either have public IP addresses or use port-forwarding).

it is bad enough requiring people to have to set up port forwarding for the computer to be used as the server.

it is not clear that client-to-client would lead to necessarily all that much better handling of latency either, for that matter.


or such...

_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
