On 4/4/2012 5:26 PM, Miles Fidelman wrote:
BGB wrote:
Not so sure. Probably similar levels of complexity between a
military sim. and, say, World of Warcraft. Fidelity to real-world
behavior is more important, and network latency matters for the
extreme real-time stuff (e.g., networked dogfights at Mach 2), but
other than that, IP networks, gaming class PCs at the endpoints,
serious graphics processors. Also more of a need for
interoperability - as there are lots of different simulations,
plugged together into lots of different exercises and training
scenarios - vs. a MMORPG controlled by a single company.
ok, so basically a heterogeneous MMO.
and distributed
well, yes, but I am not entirely sure how many non-distributed (single
server) MMOs there are in the first place.
presumably, the world has to be split between multiple servers to deal
with all of the users.
some older MMOs had "shards", where users on one server wouldn't be able
to see what users on a different server were doing, but this is AFAIK
generally not really considered acceptable in current MMOs (hence why
the world would be divided up into "areas" or "regions" instead,
presumably with some sort of load-balancing and similar).
unless, of course, this is operating under a different definition of
"distributed system" than one which allows a load-balanced
client/server architecture.
reading some stuff (an overview for the DIS protocol, ...), it seems
that the "level of abstraction" is in some ways a bit higher (than
game protocols I am familiar with), for example, it will indicate the
"entity type" in the protocol, rather than, say, the name of, its 3D
model.
Yes. The basic idea is that a local simulator - say a tank, or an
airframe - maintains a local environment model (local image generation
and position models maintained by dead reckoning) - what goes across
the network are changes to its velocity vector, and weapon fire
events. The intent is to minimize the amount of data that has to be
sent across the net, and to maintain speed of image generation by
doing rendering locally.
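as a rough sketch (Python, with the function names and threshold made up for illustration), the sender side of this dead-reckoning scheme might look like: advance the last-transmitted state along its velocity vector, and only send a fresh update once the remote estimate has drifted past some error threshold.

```python
import math

def dead_reckon(pos, vel, dt):
    """Advance an entity's last-known position along its velocity vector."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

def needs_update(true_pos, dr_pos, threshold):
    """Sender-side check: only transmit a new state update when the
    dead-reckoned estimate has drifted past an error threshold."""
    return math.dist(true_pos, dr_pos) > threshold
```

every receiver runs the same dead_reckon() between updates, which is what keeps the traffic down.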
now, why, exactly, would anyone consider doing rendering on the server?...
presumably, the server would serve mostly as a sort of message relay
(bouncing messages from one client to any nearby clients), and
potentially also handling physics (typically split between the client
and server in FPS games, where the main physics is done on the server,
such as to help prevent cheating and similar, as well as the server
running any monster/NPC AI).
although less expensive for the server, client-side physics has the
drawback of making it harder to prevent hacks (such as moving really
fast and/or teleporting), typically instead requiring the use of
detection and banning strategies.
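one simple detection strategy (a sketch; the slack factor and names are invented here) is a server-side plausibility check on client-reported moves: reject any displacement that implies a speed well beyond the movement cap.

```python
import math

def plausible_move(old_pos, new_pos, dt, max_speed, slack=1.25):
    """Server-side sanity check on a client-reported move: reject any
    displacement implying a speed well beyond the movement cap.
    The slack factor absorbs normal timing jitter."""
    return math.dist(old_pos, new_pos) <= max_speed * slack * dt
```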
ironically, all this leads to more MMOs using client-side physics, and
more FPS games using server-side physics, with an MMO generally having a
much bigger problem regarding cheating than an FPS.
typically (in an FPS or similar), rendering is purely client-side, and
usually most network events are extrapolated (based on origin and
velocity and similar), to compensate for timing between the client and
server (and the results of network ping-time and similar).
it is desirable for players and enemies to be in about the right spot,
even with maybe 250-750 ms or more between the client and server (though
many 3D engines will kick players if the ping time is more than 2000 or
3000 ms).
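the receiver-side extrapolation amounts to projecting the last snapshot forward by the elapsed time; in this sketch (Python, with a made-up cap) the age is clamped so a stalled connection doesn't fling entities off into the distance.

```python
MAX_EXTRAP = 0.75  # seconds; cap on how far ahead we dare extrapolate

def extrapolate(origin, vel, sent_time, now):
    """Receiver-side: project the last snapshot forward by the elapsed
    time, clamped so a stalled connection doesn't fling entities away."""
    age = min(max(now - sent_time, 0.0), MAX_EXTRAP)
    return tuple(o + v * age for o, v in zip(origin, vel))
```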
in my own 3D engine, it is partially split, currently with player
movement physics being split between the client and server, and most
other physics being server-side.
there is currently no physics involved in the entity extrapolation,
although doing more work here could be helpful (mostly to avoid
extrapolation occasionally putting things into walls or similar).
sadly, even single-player, it can still be a little bit of an issue
dealing with the matter of the client and server updating at different
frequencies (say, the "server" runs internally at 10Hz, and the "client"
runs at 30Hz - 60Hz), so extrapolating the position is still necessary
(camera movements at 10Hz are not exactly pleasant).
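the usual fix (sketched here in Python; the state representation is simplified to timestamped positions) is for the client to buffer the last two server states and blend between them at render time, so a 30-60Hz camera glides smoothly across 10Hz server ticks.

```python
def interp_state(s0, s1, render_time):
    """Blend two buffered 10Hz server states for a 30-60Hz client frame.
    s0 and s1 are (timestamp, position) pairs, with s0 the older one."""
    (t0, p0), (t1, p1) = s0, s1
    if t1 <= t0:
        return p1
    # blend factor, clamped to [0, 1] so we never overshoot the newer state
    a = min(max((render_time - t0) / (t1 - t0), 0.0), 1.0)
    return tuple(x0 + (x1 - x0) * a for x0, x1 in zip(p0, p1))
```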
so, this leaves allowing the client-side camera to partly move
independently of the "player" as known on the server, and using
interpolation trickery to reconcile the client and server versions of
the player's position, and occasionally using flags to deal with things
like teleporters and similar (the player will be teleported on the
server, which will send a flag to be like "you are here and looking this
direction").
but, I meant "model" in this case more in the sense of the server sends
a message more like, say:
(delta 492
(classname "npc_plane_fa18")
(org 6714 4932 5184)
(ang ...)
(vel ...)
...)
rather than, say, something like:
(delta 492
(model "model/plane/fa18/fa18.lwo")
(org 6714 4932 5184)
(ang ...)
(vel ...)
...)
though, it may not be a big deal if the engine sends both (a classname
as well as the name of the 3D model used to render it), although it may
matter more if not all clients keep the same 3D model at the same VFS
location or similar...
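resolving an abstract classname to a model is then a purely client-side lookup, roughly like this (the classnames and paths below are made up to match the example above):

```python
# Hypothetical client-side table; classnames and model paths are made up.
CLASS_MODELS = {
    "npc_plane_fa18": "model/plane/fa18/fa18.lwo",
    "npc_tank_m1":    "model/tank/m1/m1.lwo",
}

def model_for(classname, fallback="model/misc/placeholder.lwo"):
    """Resolve an abstract entity classname to whatever 3D model this
    particular client happens to have for it."""
    return CLASS_MODELS.get(classname, fallback)
```

this way each client is free to use different model files (or a placeholder) without the protocol caring.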
nothing obvious comes to mind for why it wouldn't scale, would
probably just split the world across multiple servers (by area) and
have the clients hop between servers as needed (with some
server-to-server communication).
There's been a LOT of work over the years, in the field of distributed
simulation. It's ALL about scaling, and most of the issues have to do
with time-critical, cpu-intensive calculations.
possibly, but I meant in terms of the scalability of using load-balanced
servers (divided by area) and server-to-server message passing.
the main problem area I would think would be if too many people or
entities were in a small area, forcing all of them to be dealt with by a
single server.
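one naive way to do the area split (a sketch only; cell size and server count are arbitrary here) is to hash a grid cell derived from the entity's position, which also makes the hotspot problem obvious: if everyone crowds into one cell, one server still eats the whole load.

```python
def server_for(pos, cell_size=4096.0, num_servers=8):
    """Map a world position to a server by hashing its grid cell.
    Everyone in the same cell lands on the same server, which is
    exactly the hotspot problem described above."""
    cx = int(pos[0] // cell_size)
    cy = int(pos[1] // cell_size)
    return hash((cx, cy)) % num_servers
```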
presumably, also, people are using BSP trees and similar (for the scene
physics, ...).
as far as I know all this is pretty much standard practice though...
but, oh well, I recently had a bit of "fun" tracking down a bug which
turned out to be due to timing issues involving the
"accumulation timers" being used in several different threads (in this
case, it was related more to video recording/encoding though, with the
3D renderer and video encoder running in separate threads due to
performance reasons, namely that of encoding video frames being fairly
costly and otherwise doing some fairly severe damage to the
framerate...). the video codec in question here being MJPEG (at 800x600
x 15Hz).
note that no locks are currently in use here.
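for reference, an accumulation timer of the kind being described might look roughly like this (a sketch, not the engine's actual code; the clock is injected so the logic can be tested deterministically). the safe pattern is for exactly one thread to own each timer, which sidesteps the cross-thread timing bug without needing locks.

```python
class AccumTimer:
    """Accumulation-timer sketch: folds elapsed clock time into whole
    steps (e.g. 15Hz video frames).  Intended to be owned by a single
    thread; sharing one of these between threads without locks is
    exactly the kind of bug described above."""
    def __init__(self, step, clock):
        self.step = step      # seconds per tick, e.g. 1.0/15 for 15Hz
        self.clock = clock    # injected time source, e.g. time.monotonic
        self.accum = 0.0
        self.last = clock()

    def ticks(self):
        """Return how many whole steps have elapsed since the last call,
        carrying any fractional remainder forward."""
        now = self.clock()
        self.accum += now - self.last
        self.last = now
        n = int(self.accum // self.step)
        self.accum -= n * self.step
        return n
```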
or such...
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc