Re: [vos-d] Integration of VOS and IRC

2007-04-18 Thread Ken Taylor
Yes! Accessibility in 3D virtual worlds would be *huge*. As far as I know,
no one has done anything like this yet... (and if they have, I would really
like to check it out).

-Ken


>
>
> This would be great for people who primarily want to just chat or be
> present in the world while doing other work, so they don't want the full
> 3D world.
>
> It would also make it possible for blind people to interact in the 3D
> world.
>
> Reed
>
>
>
> On Tue, Apr 17, 2007 at 01:54:23PM -0400, Peter Amstutz wrote:
> > Also something I've wanted to explore is the possibility of creating a
> > semantic representation of a space that is meaningful enough to be
> > navigated from both a MUD-style text interface, while still being able
> > to enter it as a fully 3D immersive world.  You could lay down a node
> > network of positions where the text-user could go, add descriptions to
> > all the objects in the space, and then at each node it would print out
> > the descriptions for things that were nearby.  For example (thinking of
> > the current demo "black sun" world):
> >
> > ---
> > "You are standing on a hill of brown dirt.  To the north is a large
> > white pyramid with an entranceway.  To the east the hill continues.  To
> > the west the hill continues.  To the south is black nothingness."
> >
> > $ Go north
> >
> > "You are at the entrace to the Pyramid.  To the north is a hallway with
> > a tiled floor.  At the end of the hallway are ramps leading up and to
> > the east and west.  To the west is a doorway."


___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


Re: [vos-d] Integration of VOS and IRC

2007-04-18 Thread Marcos Marado
On Wednesday 18 April 2007 16:43, Reed Hedges wrote:
> This would be great for people who primarily want to just chat or be
> present in the world while doing other work, so they don't want the full
> 3D world.
>
> It would also make it possible for blind people to interact in the 3D
> world.

Three great motivations I see for this are:

* the possibility of giving blind people a virtual world - I know many 
blind people who use talkers (non-monster-killing variants of MUDs) with the 
help of software like JAWS, because for them it is the easiest way of 
communicating. The problem here is that most people nowadays want more 
attractive clients;

* Ubiquity part I: connect everywhere - How many work colleagues do you know 
who have their Second Life client open while working? And how many of them 
have their IM client open? I'm always present in one virtual world... a 
text-based one. I wouldn't be online there during work time if it were a 3D 
environment;

* Ubiquity part II: connect everything - On the subway, on the bus, on 
weekends and vacations, where I don't have a persistent internet connection, 
I'm used to connecting via GPRS, either with a computer or a mobile 
device (a cellphone in my particular case). I wouldn't connect to a 3D world 
via cellphone (hardware limitations) or even over a GPRS connection (too 
slow). But with my SSH client I can connect to text-based virtual worlds.

See, text worlds have advantages and disadvantages over 3D worlds. What we 
don't have yet (at least that I know of) is a virtual world you can connect 
to both via a text-based interface and via a 3D application.

-- 
Marcos Marado
Sonaecom IT

___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


Re: [vos-d] Integration of VOS and IRC

2007-04-18 Thread Reed Hedges
On Wed, Apr 18, 2007 at 11:16:43AM +0100, Marcos Marado wrote:
> I'm new to VOS, so I guess I can't yet be really useful here, but I'll try to 
> help as I can...


A key concept in VOS is linked objects forming data structures, and
figuring out the best way to design and formalize those structures for
your application.  Check out the "Creating Interreality" manual for a
description, and if you have any questions, let us know.

Reed

___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


Re: [vos-d] Integration of VOS and IRC

2007-04-18 Thread Reed Hedges



> "scenery" part). Unfortunately, viewpoints usually have no navigation  
> links between them. So for what you want to do, you need a  
> combination of both.
> 
> This requires some work, but VOS is flexible enough to support all this.


Yeah, you would just have the "waypoint" object type have child links to
other waypoints. You would then combine it with the "viewpoint" type
which provides the 3D position, orientation, and other spatial info. Or
those two aspects could just be contained in one object type.

Having a set of "exits" from a viewpoint might be useful in 3D as well,
as a way to help people navigate 3D spaces.
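
To make that a bit more concrete, here's a rough sketch in Python (made-up
structures, not the actual VOS metaobject types), just to show the shape of
the data:

from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class Waypoint:
    # Combined waypoint/viewpoint: spatial info plus named links to neighbours.
    name: str                                  # node label, e.g. "pyramid entrance"
    description: str                           # static text shown to text users
    position: Tuple[float, float, float]       # "viewpoint" aspect: 3D position
    orientation: Tuple[float, float, float]    # "viewpoint" aspect: facing
    exits: Dict[str, "Waypoint"] = field(default_factory=dict)  # contextual child links

hill = Waypoint("hill", "You are standing on a hill of brown dirt.",
                (0.0, 0.0, 0.0), (0.0, 0.0, 0.0))
entrance = Waypoint("pyramid entrance", "You are at the entrance to the Pyramid.",
                    (0.0, 0.0, 20.0), (0.0, 180.0, 0.0))

# The link names double as exit labels for the text interface, and could
# also be shown to 3D users as navigation hints.
hill.exits["north"] = entrance
entrance.exits["south"] = hill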



> You need exit labels on the navigation edges for this. 

This is what the contextual names on child links are for :)


> > Gonzo(3d) goes south to the entrance to the Pyramid.
> >
> In contrast, this is terribly complicated. Deriving the intention/ 
> activity of a user (that's what you have here) from its raw movements  


We can explicitly represent it by giving the "waypoint" object type a
child that's a container for avatars who are "at" that waypoint.
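
In sketch form (again hypothetical, not real VOS structures), that's just one
more container to consult when building the text description:

# Hypothetical "who is here" container per waypoint; no geometry needed on
# the text side, the 3D side just keeps it up to date as avatars move.
occupants = {"pyramid entrance": {"Gonzo"}, "hill": set()}

def describe(waypoint, description):
    lines = [description]
    for avatar in sorted(occupants.get(waypoint, ())):
        lines.append(avatar + " is here.")
    return "\n".join(lines)

print(describe("pyramid entrance", "You are at the entrance to the Pyramid."))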


> A few other text-user commands that may be handy:
> 
> follow <avatar> - move the text-user's avatar to wherever another  
> avatar (text or 3d!) is moving to.
> 
> face <avatar> - turn the text-user's avatar to face another one. You  
> can also do this automatically if you detect a corresponding speech  
> pattern like "kao: are you there?"
> 
> approach <avatar> - like "face", but also move the avatar close to the  
> target.


These behaviors would be useful in 3D too.


Reed


___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


Re: [vos-d] Integration of VOS and IRC

2007-04-18 Thread Reed Hedges


This would be great for people who primarily want to just chat or be
present in the world while doing other work, so they don't want the full
3D world.

It would also make it possible for blind people to interact in the 3D
world.

Reed



On Tue, Apr 17, 2007 at 01:54:23PM -0400, Peter Amstutz wrote:
> Also something I've wanted to explore is the possibility of creating a 
> semantic representation of a space that is meaningful enough to be 
> navigated from both a MUD-style text interface, while still being able 
> to enter it as a fully 3D immersive world.  You could lay down a node 
> network of positions where the text-user could go, add descriptions to 
> all the objects in the space, and then at each node it would print out 
> the descriptions for things that were nearby.  For example (thinking of 
> the current demo "black sun" world):
> 
> ---
> "You are standing on a hill of brown dirt.  To the north is a large 
> white pyramid with an entranceway.  To the east the hill continues.  To 
> the west the hill continues.  To the south is black nothingness."
> 
> $ Go north
> 
> "You are at the entrace to the Pyramid.  To the north is a hallway with 
> a tiled floor.  At the end of the hallway are ramps leading up and to 
> the east and west.  To the west is a doorway."

___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


Re: [vos-d] Integration of VOS and IRC

2007-04-18 Thread Marcos Marado
On Wednesday 18 April 2007 16:22, Peter Amstutz wrote:
> On Wed, Apr 18, 2007 at 10:26:34AM +0200, Karsten Otto wrote:
> > Most 3D games already have a network of waypoints  in their world
> > maps, so computer controlled characters can easily navigate them. You
> > could use this for text-user navigation too, but this is usually too
> > fine grained (allowing for smooth walking around corners etc).
> >
> > On the other hand, most VR systems already support viewpoints, which
> > are on a higher semantic level, and thus seem a good place to attach
> > a textual description of what you can see (at least the static
> > "scenery" part). Unfortunately, viewpoints usually have no navigation
> > links between them. So for what you want to do, you need a
> > combination of both.
>
> Yes, although I'll qualify this by saying that waypoint-based node
> networks have a number of drawbacks.  On thinking about it a bit more,
> pathfinding meshes (where you run your pathfinding on the surface of a
> polygon mesh rather than a graph) are more powerful, and solve some of
> the problems you bring up below because they define areas rather than
> just points.

Agreed.

> > This requires some work, but VOS is flexible enough to support all this.
>
> Of course :-)  I actually started working on a "VOS MUD" back in the s3
> days...

Is any of that code still usable?

> > > You see Gonzo(3d) by the entrance to the Pyramid.
> > >
> > > Gonzo(3d) waves to you.
> >
> > If this works, you do not only see what happens inside your current
> > scope, but also what happens in nearby scopes. You either need some
> > AOI management for this to work, or extra grouping information on
> > each node, i.e. in the entrance node, you can see the hallway and
> > vice versa, but not what is going on in the security room on the
> > other side of the one-way mirror :-) Of course, you could again
> > separate navigation information (waypoints) from area information
> > (viewpoints) for this to work.
>
> The reason I put that in there is that I've typically found the
> "horizon" on MUDs to be very limiting.  You are given very little
> awareness of what is going on around you except in the immediate node.
> (Sometimes you get "there is a rustling to the south!" even though the
> description said "south" is only five meters down a paved street.)
> Again I think with the proper spatial representation this kind of
> visibility information could be derived automatically based on sector
> adjacency and line-of-sight tests.

Well, you also have "shouts", for instance, that are designed to reach a 
certain area around you (or the entire MUD, depending on the implementation).

> > A few other text-user commands that may be handy:
> >
> > follow <avatar> - move the text-user's avatar to wherever another
> > avatar (text or 3d!) is moving to.
> >
> > face <avatar> - turn the text-user's avatar to face another one. You
> > can also do this automatically if you detect a corresponding speech
> > pattern like "kao: are you there?"
> >
> > approach <avatar> - like "face", but also move the avatar close to the
> > target.
> >
> > ... and probably more. No need to implement these all at once, better
> > have a sort of plug-in system for the text/3d bridge.
>
> All good suggestions.  For me this discussion is mostly idle
> speculation, because we're focused on other things, but it's a useful
> thought experiment in how semantic attribution of immersive 3D VOS
> spaces could work in practice.  I'd be very happy if someone else wanted
> to pick up and run with this idea, though.

I'll see what I can do about that... Don't expect much activity, though.

-- 
Marcos Marado
Sonaecom IT

___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


Re: [vos-d] Integration of VOS and IRC

2007-04-18 Thread Peter Amstutz
On Wed, Apr 18, 2007 at 10:26:34AM +0200, Karsten Otto wrote:

> Most 3D games already have a network of waypoints  in their world  
> maps, so computer controlled characters can easily navigate them. You  
> could use this for text-user navigation too, but this is usually too  
> fine grained (allowing for smooth walking around corners etc).
>
> On the other hand, most VR systems already support viewpoints, which  
> are on a higher semantic level, and thus seem a good place to attach  
> a textual description of what you can see (at least the static  
> "scenery" part). Unfortunately, viewpoints usually have no navigation  
> links between them. So for what you want to do, you need a  
> combination of both.

Yes, although I'll qualify this by saying that waypoint-based node 
networks have a number of drawbacks.  On thinking about it a bit more, 
pathfinding meshes (where you run your pathfinding on the surface of a 
polygon mesh rather than a graph) are more powerful, and solve some of 
the problems you bring up below because they define areas rather than 
just points.
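
To illustrate what I mean, here's a toy sketch in Python (nothing to do with
actual VOS code): treat each polygon of the mesh as a search node, connect
polygons that share a border, and run A* over that, using centroid distance
as the heuristic.

from heapq import heappush, heappop

# Hypothetical mesh: polygon id -> (centroid, neighbouring polygon ids).
mesh = {
    "hill":     ((0.0, 0.0),  ["entrance"]),
    "entrance": ((0.0, 20.0), ["hill", "hallway"]),
    "hallway":  ((0.0, 40.0), ["entrance"]),
}

def distance(a, b):
    (ax, ay), (bx, by) = mesh[a][0], mesh[b][0]
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def find_path(start, goal):
    # A* over mesh polygons; 'best' holds the cheapest known cost per polygon.
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, poly, path = heappop(frontier)
        if poly == goal:
            return path
        for nxt in mesh[poly][1]:
            cost = best[poly] + distance(poly, nxt)
            if cost < best.get(nxt, float("inf")):
                best[nxt] = cost
                heappush(frontier, (cost + distance(nxt, goal), nxt, path + [nxt]))
    return None

print(find_path("hill", "hallway"))   # ['hill', 'entrance', 'hallway']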

> This requires some work, but VOS is flexible enough to support all this.

Of course :-)  I actually started working on a "VOS MUD" back in the s3 
days...

> > $ Go north
> >
> You need exit labels on the navigation edges for this. Also, each  
> node should have its own label, so the user can do things like  
> "travel to pyramid" without having to navigate any intermediary nodes  
> themselves (after all, waypoints were made to allow A* path search and  
> navigation :-)

Sure.  I was just thinking the simplest case where you just iterate 
through the cardinal directions and describe what you see in each 
direction.  The idea here was that you'd generate the descriptions as 
dynamically as possible, as opposed to some MUDs where you have to enter 
the entire node description ahead of time, and as a result the 
description is pretty static.
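
Roughly like this (a hypothetical sketch in Python; the per-direction data
would really come from spatial queries against the world rather than a
hand-written table):

# The 'nearby' table stands in for real spatial queries; a live version
# would ray-cast or consult the spatial partition instead.
nearby = {
    "hill": {
        "north": "a large white pyramid with an entranceway",
        "east":  "more of the hill",
        "south": "black nothingness",
        "west":  "more of the hill",
    },
}

def describe_node(node, base):
    parts = [base]
    for direction in ("north", "east", "south", "west"):
        seen = nearby[node].get(direction)
        if seen:
            parts.append("To the %s is %s." % (direction, seen))
    return "  ".join(parts)

print(describe_node("hill", "You are standing on a hill of brown dirt."))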

> Ok, you need some more information for a navigation node than just  
> the viewpoint itself. You also need a bounding box/sphere/polyhedron  
> that defines its scope, i.e. which of the nearby dynamic entities  
> (other users, dropped items, etc.) to add to the description. Also,  
> you could then place entering text-users at random points within this  
> area, so they do not all stand on top of each other.

Having a navigation mesh representing the space solves this problem 
neatly.  You define a set of polygons in the mesh as representing each 
"room" or "node", you can compute adjacency and line of sight pretty 
easily, and you can drop the user anywhere in that set of polygons and 
they will still be considered in that particular area.
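
Sketched out (toy Python with made-up room polygons), membership is just a
point-in-polygon test against the polygon set assigned to each room:

def point_in_polygon(pt, polygon):
    # Standard ray-casting test; polygon is a list of (x, y) vertices.
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical partition of the demo world into named rooms (2D footprints).
rooms = {
    "hill":     [[(-50, -50), (50, -50), (50, 10), (-50, 10)]],
    "entrance": [[(-5, 10), (5, 10), (5, 25), (-5, 25)]],
}

def room_of(pt):
    for name, polys in rooms.items():
        if any(point_in_polygon(pt, p) for p in polys):
            return name
    return None

print(room_of((0, 0)))    # hill
print(room_of((0, 15)))   # entrance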

> > Gonzo(3d) goes south to the entrance to the Pyramid.
> >
> In contrast, this is terribly complicated. Deriving the intention/ 
> activity of a user (that's what you have here) from its raw movements  
> can be very tricky and require a lot of computation. Tetron, Reed, I  
> don't know if you ever worked with the computer vision aspect of  
> robotics; if you did, you know what I mean. It may be possible, however,  
> to simplify things a bit for this particular case, i.e. finding the  
> navigation link closest to the point where a 3d-user left the current  
> scope.

Well, all I had in mind here was detecting the movement of a 3D user 
from one node area to another, which seems straightforward enough.  
Depending on how the space is partitioned, there should be a clear 
threshold the 3D user crosses from one area/room to another.  Whether 
that area is defined by proximity to a specific point or being in a 
certain area of the navigation mesh is up to the representation.
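
The text-bridge side of that could be as simple as this sketch (where the
area name is assumed to come from whatever spatial partition we end up
using):

# Remember which area each avatar was last seen in; when it changes,
# tell the text users about the move.
last_area = {}   # avatar name -> area name

def on_position_update(avatar, area):
    prev = last_area.get(avatar)
    if prev is not None and area != prev:
        print("%s goes from the %s to the %s." % (avatar, prev, area))
    last_area[avatar] = area

on_position_update("Gonzo(3d)", "end of the hallway")
on_position_update("Gonzo(3d)", "entrance to the Pyramid")
# -> Gonzo(3d) goes from the end of the hallway to the entrance to the Pyramid.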

> > You see Gonzo(3d) by the entrance to the Pyramid.
> >
> > Gonzo(3d) waves to you.
> >
> If this works, you do not only see what happens inside your current  
> scope, but also what happens in nearby scopes. You either need some  
> AOI management for this to work, or extra grouping information on  
> each node, i.e. in the entrance node, you can see the hallway and  
> vice versa, but not what is going on in the security room on the  
> other side of the one-way mirror :-) Of course, you could again  
> separate navigation information (waypoints) from area information  
> (viewpoints) for this to work.

The reason I put that in there is that I've typically found the 
"horizon" on MUDs to be very limiting.  You are given very little 
awareness of what is going on around you except in the immediate node.  
(Sometimes you get "there is a rustling to the south!" even though the 
description said "south" is only five meters down a paved street.)  
Again I think with the proper spatial representation this kind of 
visibility information could be derived automatically based on sector 
adjacency and line-of-sight tests.

As far as showing actions, that's a simple matter of mapping animation 
emotes (like /wave, when we get around to implementing that) to textual 
emotes.
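
i.e. a plain lookup table, something like this sketch (the emote names are
made up, since /wave doesn't exist yet):

# One-way table from animation emotes to text emotes.
EMOTE_TEXT = {
    "wave":  "{actor} waves to {target}.",
    "nod":   "{actor} nods at {target}.",
    "point": "{actor} points at {target}.",
}

def emote_to_text(actor, emote, target="you"):
    template = EMOTE_TEXT.get(emote, "{actor} does something you can't quite make out.")
    return template.format(actor=actor, target=target)

print(emote_to_text("Gonzo(3d)", "wave"))   # Gonzo(3d) waves to you.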

> A few oth

Re: [vos-d] Integration of VOS and IRC

2007-04-18 Thread Peter Amstutz
To be honest this is not something we have time to work on, but if you'd 
like to pick it up we'll be happy to help you get up to speed.

You might also want to look at the "mesh" utility in the VOS 
distribution, which is an interactive command-line shell that presents a 
textual interface to the underlying VOS data structures using a Unix 
file system metaphor.  It's not semantic (as we're discussing here) but 
it may be an interesting starting point, and is effectively the current 
"textual portal" to VOS.  It's quite powerful (much more powerful than 
the 3D GUI, at the moment).

On Wed, Apr 18, 2007 at 11:16:43AM +0100, Marcos Marado wrote:
> On Wednesday 18 April 2007 09:26, Karsten Otto wrote:
> > Am 17.04.2007 um 19:54 schrieb Peter Amstutz:
> > > [...]
> > > Also something I've wanted to explore is the possibility of creating a
> > > semantic representation of a space that is meaningful enough to be
> > > navigated from both a MUD-style text interface, while still being able
> > > to enter it as a fully 3D immersive world.
> >
> > Interesting idea, and somewhat close to my current field of work.
> 
> Great, I was looking for something like that for years now :-) 
> 
> I think that this is something really interesting, and that should not 
> only be discussed, but implemented. It would be nice that, by default, 
> a VOS installation came with the choice of having both graphical and 
> textual "entrances" to it. Furthermore, it would be nice if the 
> textual "portal" was MUD-like, maybe with a telnet and a telnet-ssl 
> port opened for connections.
> 
> I'm new to VOS, so I guess I can't yet be really useful here, but I'll 
> try to help as I can...
> 
> Best regards,
> -- 
> Marcos Marado
> Sonaecom IT
> 

-- 
[   Peter Amstutz  ][ [EMAIL PROTECTED] ][ [EMAIL PROTECTED] ]
[Lead Programmer][Interreality Project][Virtual Reality for the Internet]
[ VOS: Next Generation Internet Communication][ http://interreality.org ]
[ http://interreality.org/~tetron ][ pgpkey:  pgpkeys.mit.edu  18C21DF7 ]



___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


Re: [vos-d] Integration of VOS and IRC

2007-04-18 Thread Marcos Marado
On Wednesday 18 April 2007 09:26, Karsten Otto wrote:
> Am 17.04.2007 um 19:54 schrieb Peter Amstutz:
> > [...]
> > Also something I've wanted to explore is the possibility of creating a
> > semantic representation of a space that is meaningful enough to be
> > navigated from both a MUD-style text interface, while still being able
> > to enter it as a fully 3D immersive world.
>
> Interesting idea, and somewhat close to my current field of work.

Great, I was looking for something like that for years now :-) 

I think that this is something really interesting, and that should not only be 
discussed, but implemented. It would be nice if, by default, a VOS 
installation came with the choice of having both graphical and 
textual "entrances" to it. Furthermore, it would be nice if the 
textual "portal" was MUD-like, maybe with a telnet and a telnet-ssl port 
opened for connections.

I'm new to VOS, so I guess I can't yet be really useful here, but I'll try to 
help as I can...

Best regards,
-- 
Marcos Marado
Sonaecom IT

___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


Re: [vos-d] Integration of VOS and IRC

2007-04-18 Thread Karsten Otto
Am 17.04.2007 um 19:54 schrieb Peter Amstutz:

> [...]
> Also something I've wanted to explore is the possibility of creating a
> semantic representation of a space that is meaningful enough to be
> navigated from both a MUD-style text interface, while still being able
> to enter it as a fully 3D immersive world.
>
Interesting idea, and somewhat close to my current field of work.
Allow me to pitch in my 2 cents...

> You could lay down a node
> network of positions where the text-user could go, add descriptions to
> all the objects in the space, and then at each node it would print out
> the descriptions for things that were nearby.
>
Most 3D games already have a network of waypoints in their world  
maps, so computer-controlled characters can easily navigate them. You  
could use this for text-user navigation too, but this is usually too  
fine-grained (allowing for smooth walking around corners etc.).

On the other hand, most VR systems already support viewpoints, which  
are on a higher semantic level, and thus seem a good place to attach  
a textual description of what you can see (at least the static  
"scenery" part). Unfortunately, viewpoints usually have no navigation  
links between them. So for what you want to do, you need a  
combination of both.

This requires some work, but VOS is flexible enough to support all this.

>   For example (thinking of the current demo "black sun" world):
>
> ---
> "You are standing on a hill of brown dirt.  To the north is a large
> white pyramid with an entranceway.  To the east the hill continues.  To
> the west the hill continues.  To the south is black nothingness."
>
> $ Go north
>
You need exit labels on the navigation edges for this. Also, each  
node should have its own label, so the user can do things like  
"travel to pyramid" without having to navigate any intermediary nodes  
themselves (after all, waypoints were made to allow A* path search and  
navigation :-)
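
For example (a toy sketch in Python; I use plain breadth-first search here
instead of A* just to keep it short):

from collections import deque

# Node -> {exit label: destination node}; both nodes and exits carry labels.
exits = {
    "hill":             {"north": "pyramid entrance"},
    "pyramid entrance": {"south": "hill", "north": "hallway"},
    "hallway":          {"south": "pyramid entrance"},
}

def travel(start, goal):
    # Shortest path by number of hops; returns the exit labels to follow.
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if goal in node:              # loose match, so "pyramid" finds "pyramid entrance"
            return path
        for label, dest in exits[node].items():
            if dest not in seen:
                seen.add(dest)
                queue.append((dest, path + [label]))
    return None

print(travel("hill", "hallway"))   # ['north', 'north']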

> "You are at the entrace to the Pyramid.  To the north is a hallway  
> with
> a tiled floor.  At the end of the hallway are ramps leading up and to
> the east and west.  To the west is a doorway."
>
> $ Go north
>
> "You are at the end of the hallway.  To the south is the entrance  
> to the
> Pyramid.  To the west is a doorway.  Up and to the east is a ramp.  Up
> and to the west is a ramp.
> Gonzo is here."
>
Ok, you need some more information for a navigation node than just  
the viewpoint itself. You also need a bounding box/sphere/polyhedron  
that defines its scope, i.e. which of the nearby dynamic entities  
(other users, dropped items, etc.) to add to the description. Also,  
you could then place entering text-users at random points within this  
area, so they do not all stand on top of each other.
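
A sketch of what I mean (hypothetical data; an axis-aligned box stands in
for whatever bounding volume you pick):

import random

# Node name -> ((min_x, min_y, min_z), (max_x, max_y, max_z)) scope box.
scopes = {
    "pyramid entrance": ((-5.0, 0.0, 10.0), (5.0, 3.0, 25.0)),
}

def inside(point, box):
    lo, hi = box
    return all(lo[i] <= point[i] <= hi[i] for i in range(3))

def entities_in_scope(node, entity_positions):
    # Which dynamic entities to mention in this node's description.
    box = scopes[node]
    return [name for name, pos in entity_positions.items() if inside(pos, box)]

def spawn_point(node):
    # Random point inside the scope, so text users don't stack up.
    lo, hi = scopes[node]
    return tuple(random.uniform(lo[i], hi[i]) for i in range(3))

print(entities_in_scope("pyramid entrance",
                        {"Gonzo": (0.0, 1.0, 12.0), "Kermit": (40.0, 1.0, 80.0)}))
print(spawn_point("pyramid entrance"))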

> Gonzo(3d) says Hello!
>
Now this is straightforward. The current IRC bridge can do this  
already.

> Gonzo(3d) goes south to the entrance to the Pyramid.
>
In contrast, this is terribly complicated. Deriving the intention/ 
activity of a user (that's what you have here) from its raw movements  
can be very tricky and require a lot of computation. Tetron, Reed, I  
don't know if you ever worked with the computer vision aspect of  
robotics; if you did, you know what I mean. It may be possible, however,  
to simplify things a bit for this particular case, i.e. finding the  
navigation link closest to the point where a 3d-user left the current  
scope.
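
For the simplified case, something like this sketch (the exit positions are
made up; they would come from the navigation links themselves):

def nearest_link(exit_position, links):
    # links: {exit label: (x, y, z) position of that exit}.
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(links, key=lambda label: dist2(exit_position, links[label]))

links_from_hallway = {
    "south (pyramid entrance)": (0.0, 0.0, 10.0),
    "west (doorway)":           (-8.0, 0.0, 40.0),
    "up east (ramp)":           (8.0, 2.0, 45.0),
}

# The 3D user was last seen here as they left the hallway's scope...
print(nearest_link((1.0, 0.0, 12.0), links_from_hallway))
# -> "south (pyramid entrance)", so the text side can report
#    "Gonzo(3d) goes south to the entrance to the Pyramid."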

> You see Gonzo(3d) by the entrance to the Pyramid.
>
> Gonzo(3d) waves to you.
>
If this works, you not only see what happens inside your current  
scope, but also what happens in nearby scopes. You either need some  
AOI management for this to work, or extra grouping information on  
each node, i.e. in the entrance node, you can see the hallway and  
vice versa, but not what is going on in the security room on the  
other side of the one-way mirror :-) Of course, you could again  
separate navigation information (waypoints) from area information  
(viewpoints) for this to work.

> $
> ---
>
> And so forth...
>
A few other text-user commands that may be handy:

follow <avatar> - move the text-user's avatar to wherever another  
avatar (text or 3d!) is moving to.

face <avatar> - turn the text-user's avatar to face another one. You  
can also do this automatically if you detect a corresponding speech  
pattern like "kao: are you there?"

approach <avatar> - like "face", but also move the avatar close to the  
target.

... and probably more. No need to implement these all at once, better  
have a sort of plug-in system for the text/3d bridge.
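
A rough sketch of how the bridge side of these commands might look (the
positions/headings tables stand in for whatever the world actually exposes;
none of this is real VOS API):

import math

positions = {"kao": (0.0, 0.0, 0.0), "Gonzo(3d)": (10.0, 0.0, 5.0)}
headings = {"kao": 0.0}

def face(me, target):
    # Turn 'me' to face 'target' (yaw only, in degrees).
    mx, _, mz = positions[me]
    tx, _, tz = positions[target]
    headings[me] = math.degrees(math.atan2(tx - mx, tz - mz))

def approach(me, target, stop_distance=1.5):
    # Like face(), but also move to just short of the target.
    face(me, target)
    mx, my, mz = positions[me]
    tx, ty, tz = positions[target]
    d = math.dist((mx, mz), (tx, tz))
    if d > stop_distance:
        t = (d - stop_distance) / d
        positions[me] = (mx + (tx - mx) * t, my, mz + (tz - mz) * t)

def follow(me, target):
    # Naive follow: just re-approach whenever the target moves.
    approach(me, target)

approach("kao", "Gonzo(3d)")
print(headings["kao"], positions["kao"])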


Regards,
Karsten Otto




___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


Re: [vos-d] Integration of VOS and IRC

2007-04-17 Thread Peter Amstutz
Yes, the IRC plugin is included as part of the distribution by default.  
It creates a VOS user for each IRC user in the virtual world, and 
creates a separate IRC client session for each user in the VOS world, so 
it is completely transparent.

Currently IRC users in VOS aren't given any form, but there's no reason 
why they couldn't be given avatars and placed somewhere on the map 
(someone suggested having them wander around aimlessly, like the 
townspeople in certain console RPGs :-)

Also something I've wanted to explore is the possibility of creating a 
semantic representation of a space that is meaningful enough to be 
navigated from both a MUD-style text interface, while still being able 
to enter it as a fully 3D immersive world.  You could lay down a node 
network of positions where the text-user could go, add descriptions to 
all the objects in the space, and then at each node it would print out 
the descriptions for things that were nearby.  For example (thinking of 
the current demo "black sun" world):

---
"You are standing on a hill of brown dirt.  To the north is a large 
white pyramid with an entranceway.  To the east the hill continues.  To 
the west the hill continues.  To the south is black nothingness."

$ Go north

"You are at the entrace to the Pyramid.  To the north is a hallway with 
a tiled floor.  At the end of the hallway are ramps leading up and to 
the east and west.  To the west is a doorway."

$ Go north

"You are at the end of the hallway.  To the south is the entrance to the 
Pyramid.  To the west is a doorway.  Up and to the east is a ramp.  Up 
and to the west is a ramp.
Gonzo is here."

Gonzo(3d) says Hello!

Gonzo(3d) goes south to the entrance to the Pyramid.

You see Gonzo(3d) by the entrance to the Pyramid.

Gonzo(3d) waves to you.

$ 
---

And so forth...

On Tue, Apr 17, 2007 at 04:54:36PM +0100, Marcos Marado wrote:
> 
> Hi there, 
> I said this on your IRC channel...
> -
> 
> * Now talking on #vos
> * Topic for #vos is: Virtual Object System :: http:://interreality.org :: 
> Free 
> Multi-user Virtual Reality
> * Topic for #vos set by tetron|mac at Tue Dec 12 04:28:20 2006
> * #vos :[freenode-info] why register and identify? your IRC nick is how 
> people 
> know you. http://freenode.net/faq.shtml#nicksetup
> hi there
> * You are now known as Mind_Booster
> <Mind_Booster> I saw your website...
> <Mind_Booster> I wonder, does the VOS server already come with this "plugin" 
> to interconnect a world with IRC?
> <Mind_Booster> I'm a talker owner (for those not knowing, a talker is a 
> text-based virtual world) and it would be great to be able to connect it to a 
> graphical world like VOS
> <Mind_Booster> keeping both interfaces, the "via telnet text-based world" and 
> the vos-browser...
> <Mind_Booster> hmm, I guess you're all asleep :-P
> <Mind_Booster> I guess I'll email this question to the mailing list :-)
> <Mind_Booster> thanks anyway, and keep up with the effort :-)
> 
> -
> Can anyone please give me an answer? I would really be interested in this.
> 
> -- 
> Marcos Marado
> Sonaecom IT
> 

-- 
[   Peter Amstutz  ][ [EMAIL PROTECTED] ][ [EMAIL PROTECTED] ]
[Lead Programmer][Interreality Project][Virtual Reality for the Internet]
[ VOS: Next Generation Internet Communication][ http://interreality.org ]
[ http://interreality.org/~tetron ][ pgpkey:  pgpkeys.mit.edu  18C21DF7 ]



___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


[vos-d] Integration of VOS and IRC

2007-04-17 Thread Marcos Marado

Hi there, 
I said this on your IRC channel...
-

* Now talking on #vos
* Topic for #vos is: Virtual Object System :: http:://interreality.org :: Free 
Multi-user Virtual Reality
* Topic for #vos set by tetron|mac at Tue Dec 12 04:28:20 2006
* #vos :[freenode-info] why register and identify? your IRC nick is how people 
know you. http://freenode.net/faq.shtml#nicksetup
 hi there
* You are now known as Mind_Booster
<Mind_Booster> I saw your website...
<Mind_Booster> I wonder, does the VOS server already come with this "plugin" 
to interconnect a world with IRC?
<Mind_Booster> I'm a talker owner (for those not knowing, a talker is a 
text-based virtual world) and it would be great to be able to connect it to a 
graphical world like VOS
<Mind_Booster> keeping both interfaces, the "via telnet text-based world" and 
the vos-browser...
<Mind_Booster> hmm, I guess you're all asleep :-P
<Mind_Booster> I guess I'll email this question to the mailing list :-)
<Mind_Booster> thanks anyway, and keep up with the effort :-)

-
Can anyone please give me an answer? I would really be interested in this.

-- 
Marcos Marado
Sonaecom IT

___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d