On Wed, Apr 18, 2007 at 10:26:34AM +0200, Karsten Otto wrote:
> Most 3D games already have a network of waypoints in their world
> maps, so computer controlled characters can easily navigate them. You
> could use this for text-user navigation too, but this is usually too
> fine grained (allowing for smooth walking around corners etc).
>
> On the other hand, most VR systems already support viewpoints, which
> are on a higher semantic level, and thus seem a good place to attach
> a textual description of what you can see (at least the static
> "scenery" part). Unfortunately, viewpoints usually have no navigation
> links between them. So for what you want to do, you need a
> combination of both.
Yes, although I'll qualify this by saying that waypoint-based node
networks have a number of drawbacks. On thinking about it a bit more,
pathfinding meshes (where you run your pathfinding on the surface of a
polygon mesh rather than on a graph) are more powerful, and solve some
of the problems you bring up below, because they define areas rather
than just points.

> This requires some work, but VOS is flexible enough to support all
> this.

Of course :-) I actually started working on a "VOS MUD" back in the s3
days...

> > $ Go north
>
> You need exit labels on the navigation edges for this. Also, each
> node should have its own label, so the user can do things like
> "travel to pyramid" without having to navigate any intermediary nodes
> by itself (after all, waypoints were made to allow A* path search and
> navigation :-)

Sure. I was just thinking of the simplest case, where you iterate
through the cardinal directions and describe what you see in each
direction. The idea here was that you'd generate the descriptions as
dynamically as possible, as opposed to some MUDs where you have to
enter the entire node description ahead of time, with the result that
the description is pretty static.

> Ok, you need some more information for a navigation node than just
> the viewpoint itself. You also need a bounding box/sphere/polyhedron
> that defines its scope, i.e. which of the nearby dynamic entities
> (other users, dropped items, etc.) to add to the description. Also,
> you could then place entering text-users at random points within this
> area, so they do not all stand on top of each other.

This is another problem that a navigation mesh representing the space
solves neatly. You define a set of polygons in the mesh as representing
each "room" or "node"; you can compute adjacency and line of sight
pretty easily, and you can drop the user anywhere in that set of
polygons and they are still considered to be in that particular area.

> > Gonzo(3d) goes south to the entrance to the Pyramid.
> In contrast, this is terribly complicated. Deriving the intention/
> activity of a user (that's what you have here) from its raw movements
> can be very tricky and require a lot of computation. Tetron, Reed, I
> don't know if you ever worked with the computer vision aspect in
> robotics; if you did, you know what I mean. It may be possible,
> however, to simplify things a bit for this particular case, i.e. by
> finding the navigation link closest to the point where a 3d-user
> left the current scope.

Well, all I had in mind here was detecting the movement of a 3D user
from one node area to another, which seems straightforward enough.
Depending on how the space is partitioned, there should be a clear
threshold the 3D user crosses from one area/room to another. Whether
that area is defined by proximity to a specific point or by membership
in a certain area of the navigation mesh is up to the representation.

> > You see Gonzo(3d) by the entrance to the Pyramid.
> >
> > Gonzo(3d) waves to you.
>
> If this works, you not only see what happens inside your current
> scope, but also what happens in nearby scopes. You either need some
> AOI management for this to work, or extra grouping information on
> each node, i.e. in the entrance node you can see the hallway and
> vice versa, but not what is going on in the security room on the
> other side of the one-way mirror :-) Of course, you could again
> separate navigation information (waypoints) from area information
> (viewpoints) for this to work.

The reason I put that in there is that I've typically found the
"horizon" on MUDs to be very limiting. You are given very little
awareness of what is going on around you outside the immediate node.
(Sometimes you get "There is a rustling to the south!" even though the
description said "south" is only five meters down a paved street.)
Again, I think with the proper spatial representation this kind of
visibility information could be derived automatically, based on sector
adjacency and line-of-sight tests. As far as showing actions goes,
that's a simple matter of mapping animation emotes (like /wave, when
we get around to implementing that) to textual emotes.

> A few other text-user commands that may be handy:
>
> follow <user> - move the text-user's avatar to wherever another
> avatar (text or 3d!) is moving to.
>
> face <user> - turn the text-user's avatar to face another one. You
> can also do this automatically if you detect a corresponding speech
> pattern like "kao: are you there?"
>
> approach <user> - like "face", but also move the avatar close to the
> target.
>
> ... and probably more. No need to implement these all at once;
> better to have a sort of plug-in system for the text/3d bridge.

All good suggestions. For me this discussion is mostly idle
speculation, because we're focused on other things, but it's a useful
thought experiment in how semantic attribution of immersive 3D VOS
spaces could work in practice. I'd be very happy if someone else
wanted to pick up and run with this idea, though.

-- 
[ Peter Amstutz ][ [EMAIL PROTECTED] ][ [EMAIL PROTECTED] ]
[Lead Programmer][Interreality Project][Virtual Reality for the Internet]
[ VOS: Next Generation Internet Communication][ http://interreality.org ]
[ http://interreality.org/~tetron ][ pgpkey: pgpkeys.mit.edu 18C21DF7 ]
_______________________________________________
vos-d mailing list
[email protected]
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d
