On 17.04.2007, at 19:54, Peter Amstutz wrote:

> [...]
> Also something I've wanted to explore is the possibility of creating a
> semantic representation of a space that is meaningful enough to be
> navigated from both a MUD-style text interface, while still being able
> to enter it as a fully 3D immersive world.
>
Interesting idea, and somewhat close to my current field of work.
Allow me to pitch in my 2 cents...

> You could lay down a node
> network of positions where the text-user could go, add descriptions to
> all the objects in the space, and then at each node it would print out
> the descriptions for things that were nearby.
>
Most 3D games already have a network of waypoints in their world
maps, so computer-controlled characters can easily navigate them. You
could use this for text-user navigation too, but it is usually too
fine-grained (allowing for smooth walking around corners, etc.).

On the other hand, most VR systems already support viewpoints, which  
are on a higher semantic level, and thus seem a good place to attach  
a textual description of what you can see (at least the static  
"scenery" part). Unfortunately, viewpoints usually have no navigation  
links between them. So for what you want to do, you need a  
combination of both.

This requires some work, but VOS is flexible enough to support all this.
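
To illustrate, here is a rough Python sketch of how the two could be
combined -- one navigation node per viewpoint, connected by labelled
exits derived from the waypoint graph. All names are made up for
illustration, this is not actual VOS API:

    from dataclasses import dataclass, field

    @dataclass
    class NavNode:
        name: str            # node label, e.g. "pyramid entrance"
        description: str     # static scenery text shown to text-users
        position: tuple      # (x, y, z) of the viewpoint
        exits: dict = field(default_factory=dict)   # exit label -> NavNode

    entrance = NavNode("pyramid entrance",
                       "You are at the entrance to the Pyramid. ...",
                       (0.0, 0.0, 10.0))
    hallway = NavNode("hallway",
                      "You are at the end of the hallway. ...",
                      (0.0, 0.0, 20.0))
    entrance.exits["north"] = hallway
    hallway.exits["south"] = entrance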

>   For example (thinking of the current demo "black sun" world):
>
> ---
> "You are standing on a hill of brown dirt.  To the north is a large
> white pyramid with an entranceway.  To the east the hill continues.  To
> the west the hill continues.  To the south is black nothingness."
>
> $ Go north
>
You need exit labels on the navigation edges for this. Also, each
node should have its own label, so the user can do things like
"travel to pyramid" without having to navigate any intermediary nodes
themselves (after all, waypoints were made to allow A* path search
and navigation :-)

> "You are at the entrace to the Pyramid.  To the north is a hallway  
> with
> a tiled floor.  At the end of the hallway are ramps leading up and to
> the east and west.  To west is a doorway."
>
> $ Go north
>
> "You are at the end of the hallway.  To the south is the entrance  
> to the
> Pyramid.  To the west is a doorway.  Up and to the east is a ramp.  Up
> and to the west is a ramp.
> Gonzo is here.
>
Ok, you need some more information for a navigation node than just
the viewpoint itself. You also need a bounding box/sphere/polyhedron
that defines its scope, i.e. which of the nearby dynamic entities
(other users, dropped items, etc.) to add to the description. Also,
you could then place entering text-users at random points within this
area, so they do not all stand on top of each other.

> Gonzo(3d) says Hello!
>
Now this is straightforward. The current IRC bridge can do this
already.

> Gonzo(3d) goes south to the entrance to the Pyramid.
>
In contrast, this is terribly complicated. Deriving the intention/
activity of a user (that's what you have here) from their raw
movements can be very tricky and require a lot of computation.
Tetron, Reed, I don't know if you ever worked with the computer
vision aspect in robotics; if you did, you know what I mean. It may
be possible, however, to simplify things a bit for this particular
case, i.e. by finding the navigation link closest to the point where
a 3D user left the current scope.

> You see Gonzo(3d) by the entrance to the Pyramid.
>
> Gonzo(3d) waves to you.
>
If this works, you not only see what happens inside your current
scope, but also what happens in nearby scopes. You either need some
AOI management for this to work, or extra grouping information on
each node, i.e. in the entrance node you can see the hallway and vice
versa, but not what is going on in the security room on the other
side of the one-way mirror :-) Of course, you could again separate
navigation information (waypoints) from area information (viewpoints)
for this to work.

> $
> ---
>
> And so forth...
>
A few other text-user commands that may be handy:

follow <user> - move the text-user's avatar to wherever another
avatar (text or 3d!) is moving.

face <user> - turn the text-user's avatar to face another one. You  
can also do this automatically if you detect a corresponding speech  
pattern like "kao: are you there?"

approach <user> - like "face", but also move the avatar close to the  
target.

... and probably more. No need to implement these all at once; better
to have a sort of plug-in system for the text/3d bridge.


Regards,
Karsten Otto



