On Wednesday 18 April 2007 16:22, Peter Amstutz wrote:
> On Wed, Apr 18, 2007 at 10:26:34AM +0200, Karsten Otto wrote:
> > Most 3D games already have a network of waypoints  in their world
> > maps, so computer controlled characters can easily navigate them. You
> > could use this for text-user navigation too, but this is usually too
> > fine grained (allowing for smooth walking around corners etc).
> >
> > On the other hand, most VR systems already support viewpoints, which
> > are on a higher semantic level, and thus seem a good place to attach
> > a textual description of what you can see (at least the static
> > "scenery" part). Unfortunately, viewpoints usually have no navigation
> > links between them. So for what you want to do, you need a
> > combination of both.
>
> Yes, although I'll qualify this by saying that waypoint-based node
> networks have a number of drawbacks.  On thinking about it a bit more,
> pathfinding meshes (where you run your pathfinding on the surface of a
> polygon mesh rather than a graph) are more powerful, and solves some of
> the problems you bring up belowe because they define areas rather than
> just points.

Agreed.
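To make the comparison concrete: on a navmesh you can treat each polygon as a graph node and each shared edge as a link, then run ordinary A* over that. Here is a minimal sketch (all names and the toy mesh are hypothetical, not existing VOS code):

```python
import heapq

def astar(adjacency, centers, start, goal):
    """A* over a navmesh: nodes are polygon ids, links are shared edges.

    adjacency: dict mapping polygon id -> list of neighbouring polygon ids
    centers:   dict mapping polygon id -> (x, y) centroid, used both for
               edge costs and for the straight-line heuristic
    Returns the list of polygon ids from start to goal, or None.
    """
    def dist(a, b):
        (ax, ay), (bx, by) = centers[a], centers[b]
        return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

    frontier = [(0.0, start)]          # priority queue of (f-score, node)
    came_from = {start: None}
    cost = {start: 0.0}                # best known g-score per node
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            path = []                  # walk back through came_from
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for nxt in adjacency[current]:
            new_cost = cost[current] + dist(current, nxt)
            if nxt not in cost or new_cost < cost[nxt]:
                cost[nxt] = new_cost
                came_from[nxt] = current
                heapq.heappush(frontier, (new_cost + dist(nxt, goal), nxt))
    return None
```

Because the result is a sequence of *areas* rather than points, a character can cut corners smoothly anywhere inside each polygon instead of walking rigidly from waypoint to waypoint.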

> > This requires some work, but VOS is flexible enough to support all this.
>
> Of course :-)  I actually started working on a "VOS MUD" back in the s3
> days...

Is any of that code still usable?

> > > You see Gonzo(3d) by the entrance to the Pyramid.
> > >
> > > Gonzo(3d) waves to you.
> >
> > If this works, you do not only see what happens inside your current
> > scope, but also what happens in nearby scopes. You either need some
> > AOI management for this to work, or extra grouping information on
> > each node, i.e. in the entrance node, you can see the hallway and
> > vice versa, but not what is going on in the security room on the
> > other side of the one-way mirror :-) Of course, you could again
> > separate navigation information (waypoints) from area information
> > (viewpoints) for this to work.
>
> The reason I put that in there is that I've typically found the
> "horizon" on MUDs to be very limiting.  You are given very little
> awareness of what is going on around you except in the immediate node.
> (Sometimes you get "there is a rustling to the south!" even though the
> description said "south" is only five meters down a paved street.)
> Again I think with the proper spatial representation this kind of
> visibility information could be derived automatically based on sector
> adjacency and line-of-sight tests.

Well, you also have "shouts", for instance, that are designed to reach a 
certain area around you (or the entire MUD, depending on the implementation).
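The "derive visibility automatically" idea could look something like this: a sector is visible from the current one if it is adjacent *and* a line-of-sight segment between sector centers is not blocked by an occluder such as the one-way mirror. A rough sketch with entirely made-up names:

```python
def segments_cross(p1, p2, p3, p4):
    """True if segment p1-p2 properly crosses segment p3-p4 (orientation test)."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p1, p2, p3) != orient(p1, p2, p4) and
            orient(p3, p4, p1) != orient(p3, p4, p2))

def visible_sectors(sectors, adjacency, walls, here):
    """Sectors a text user in `here` should hear events from:
    adjacent sectors with an unblocked line of sight between centers.

    sectors:   dict mapping sector name -> (x, y) center
    adjacency: dict mapping sector name -> list of adjacent sector names
    walls:     list of ((x1, y1), (x2, y2)) occluding segments
    """
    seen = []
    for other in adjacency[here]:
        clear = all(not segments_cross(sectors[here], sectors[other], w1, w2)
                    for (w1, w2) in walls)
        if clear:
            seen.append(other)
    return seen
```

With the mirror modelled as a wall segment, the entrance would report the hallway but stay silent about the security room, which is exactly the grouping behaviour described above.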

> > A few other text-user commands that may be handy:
> >
> > follow <user> - move the text-user's avatar to wherever another
> > avatar (text or 3d!) is moving to.
> >
> > face <user> - turn the text-user's avatar to face another one. You
> > can also do this automatically if you detect a corresponding speech
> > pattern like "kao: are you there?"
> >
> > approach <user> - like "face", but also move the avatar close to the
> > target.
> >
> > ... and probably more. No need to implement these all at once, better
> > have a sort of plug-in system for the text/3d bridge.
>
> All good suggestions.  For me this discussion is mostly idle
> speculation, because we're focused on other things, but it's a useful
> thought experiment in how semantic attribution of immersive 3D VOS
> spaces could work in practice.  I'd be very happy if someone else wanted
> to pick up and run with this idea, though.

I'll see what I can do about that... Don't expect much activity, though.
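For the plug-in idea, a small command registry on the text/3d bridge would let commands like "face" and "approach" be added piecemeal without touching the bridge core. A purely hypothetical sketch (none of these names exist in VOS):

```python
class TextBridge:
    """Minimal plug-in registry for text-user commands (hypothetical API)."""

    def __init__(self):
        self.commands = {}

    def command(self, name):
        """Decorator that registers a handler under a command verb."""
        def register(fn):
            self.commands[name] = fn
            return fn
        return register

    def dispatch(self, line):
        """Split 'verb args' input and route it to the registered handler."""
        verb, _, arg = line.partition(" ")
        handler = self.commands.get(verb)
        if handler is None:
            return f"Unknown command: {verb}"
        return handler(arg)

bridge = TextBridge()

@bridge.command("face")
def face(target):
    # In a real bridge this would rotate the avatar in the 3D scene.
    return f"You turn to face {target}."

@bridge.command("approach")
def approach(target):
    # 'approach' would combine 'face' with a short navmesh walk.
    return f"You walk over to {target}."
```

A "follow" plug-in would register the same way, so new verbs never require changes to the dispatcher itself.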

-- 
Marcos Marado
Sonaecom IT

_______________________________________________
vos-d mailing list
[email protected]
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d
