Re: [vos-d] Integration of VOS and IRC

2007-04-18 Thread Marcos Marado
On Wednesday 18 April 2007 09:26, Karsten Otto wrote:
 Am 17.04.2007 um 19:54 schrieb Peter Amstutz:
  [...]
  Also something I've wanted to explore is the possibility of creating a
  semantic representation of a space that is meaningful enough to be
  navigated from both a MUD-style text interface, while still being able
  to enter it as a fully 3D immersive world.

 Interesting idea, and somewhat close to my current field of work.

Great; I've been looking for something like that for years now :-) 

I think this is really interesting, and that it should not only be 
discussed, but implemented. It would be nice if, by default, a VOS 
installation came with the choice of having both graphical and 
textual entrances to it. Furthermore, it would be nice if the 
textual portal were MUD-like, perhaps with a telnet and a telnet-ssl port 
open for connections.

I'm new to VOS, so I guess I can't yet be really useful here, but I'll try to 
help as I can...

Best regards,
-- 
Marcos Marado
Sonaecom IT

___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


[vos-d] server requirements

2007-04-18 Thread Reed Hedges


In a different forum, Hellekin Wolf asked:

 The server issue mentioned above makes me think of the requirements for
 running a VOS world. Do I need hardware graphics or is it only for the
 client side? 

A server does not need any graphics hardware.  In fact, many servers are
just providing data, so you don't even need a fast processor, though
things tend to run in many threads, so the more processors the better.  The
current design does use a lot of memory if you have lots of objects, though 
changing that is a big priority for the next big version (S5) -- a really 
good threading model is also planned, of course, as Pete has been telling us 
about.

For reference, interreality.org is an AMD 64 X2 dual-core with 2 GB RAM, and
it runs with less than 1% load; we anticipate being able to run several
(5-10) small worlds on it as well as the website.  It replaced a 300 MHz
Pentium II with less than 512 MB of memory!  That machine ran pretty well,
though memory was tight.  I have a 1.5 GHz AMD at home and it also runs fine.

In the future we may have server modules that do dynamics and physics
simulation for everyone, that will require a fast processor, or possibly
a coprocessor.  Also, servers that need to do more computation will
obviously require faster processors. 

The client requires some kind of 3D graphics hardware to run well.
Crystal Space does have a software-only renderer though it is not very
actively maintained. Supporting that software-only renderer is something
that we'd like to do in Terangreal (the client) eventually.

Of course, due to the flexibility of VOS, it's also technically possible 
to create Vobjects within Terangreal (by modifying Terangreal's code) 
that can act as a server to other clients :) (You can also just
create objects in the mesh tool; this is a good way of experimenting
with stuff.)

The server framework is called omnivos.  If you look at (vos/apps/omnivos)
you'll see that it's like 15 lines of code :)  It mainly just loads
plugins.  Run it with one of the example configuration files like this:

  omnivos -c example.xod -nofork -o -.

Reed




Re: [vos-d] Integration of VOS and IRC

2007-04-18 Thread Marcos Marado
On Wednesday 18 April 2007 16:22, Peter Amstutz wrote:
 On Wed, Apr 18, 2007 at 10:26:34AM +0200, Karsten Otto wrote:
  Most 3D games already have a network of waypoints  in their world
  maps, so computer controlled characters can easily navigate them. You
  could use this for text-user navigation too, but this is usually too
  fine grained (allowing for smooth walking around corners etc).
 
  On the other hand, most VR systems already support viewpoints, which
  are on a higher semantic level, and thus seem a good place to attach
  a textual description of what you can see (at least the static
  scenery part). Unfortunately, viewpoints usually have no navigation
  links between them. So for what you want to do, you need a
  combination of both.

 Yes, although I'll qualify this by saying that waypoint-based node
 networks have a number of drawbacks.  On thinking about it a bit more,
 pathfinding meshes (where you run your pathfinding on the surface of a
 polygon mesh rather than a graph) are more powerful, and solve some of
 the problems you bring up below because they define areas rather than
 just points.

Agreed.

  This requires some work, but VOS is flexible enough to support all this.

 Of course :-)  I actually started working on a VOS MUD back in the s3
 days...

Is any of that code still usable?

   You see Gonzo(3d) by the entrance to the Pyramid.
  
   Gonzo(3d) waves to you.
 
  If this works, you not only see what happens inside your current
  scope, but also what happens in nearby scopes. You either need some
  AOI management for this to work, or extra grouping information on
  each node, i.e. in the entrance node, you can see the hallway and
  vice versa, but not what is going on in the security room on the
  other side of the one-way mirror :-) Of course, you could again
  separate navigation information (waypoints) from area information
  (viewpoints) for this to work.

 The reason I put that in there is that I've typically found the
 horizon on MUDs to be very limiting.  You are given very little
 awareness of what is going on around you except in the immediate node.
 (Sometimes you get "There is a rustling to the south!" even though the
 description said south is only five meters down a paved street.)
 Again, I think with the proper spatial representation this kind of
 visibility information could be derived automatically based on sector
 adjacency and line-of-sight tests.

Well, you also have shouts, for instance, that are designed to reach a 
certain area around you (or the entire MUD, depending on the implementation).
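A rough sketch of how that derived awareness might work: breadth-first search over sector adjacency, with a hop limit standing in for line-of-sight range or shout radius. The sector names and adjacency data here are purely illustrative, not actual VOS data.

```python
# Sketch: which sectors can "hear" an event, via breadth-first search
# over sector adjacency with a hop limit (a crude stand-in for
# line-of-sight tests or a shout radius).  Illustrative data only.

from collections import deque

adjacency = {
    "hill": ["entrance"],
    "entrance": ["hill", "hallway"],
    "hallway": ["entrance", "security room"],
    "security room": ["hallway"],
}

def audible_from(start, max_hops):
    """Return all sectors within max_hops of start (including start)."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue  # don't expand past the visibility/shout limit
        for nxt in adjacency.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return seen
```

A real implementation would replace the hop limit with actual line-of-sight or occlusion tests, but the traversal structure would be similar.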

  A few other text-user commands that may be handy:
 
  follow user - move the text-user's avatar to wherever another
  avatar (text or 3d!) is moving to.
 
  face user - turn the text-user's avatar to face another one. You
  can also do this automatically if you detect a corresponding speech
  pattern like "kao: are you there?"
 
  approach user - like face, but also move the avatar close to the
  target.
 
  ... and probably more. No need to implement these all at once, better
  have a sort of plug-in system for the text/3d bridge.

 All good suggestions.  For me this discussion is mostly idle
 speculation, because we're focused on other things, but it's a useful
 thought experiment in how semantic attribution of immersive 3D VOS
 spaces could work in practice.  I'd be very happy if someone else wanted
 to pick up and run with this idea, though.

I'll see what I can do in that matter... Don't expect much activity, though.

-- 
Marcos Marado
Sonaecom IT



Re: [vos-d] Thinking about Javascript

2007-04-18 Thread Reed Hedges



On Tue, Apr 17, 2007 at 07:27:21AM +, Lalo Martins wrote:
 One problem I have with the pure-js version is the nature of HTTP; either
 the browser would need to keep a persistent connection to the server, like
 some web chat rooms do -- which is error prone (hard to recover from a
 disconnect) and makes the deserialisation a bit more complicated -- or it
 would have to poll, which sucks for a number of other reasons.


Yeah, this is why I was looking for libraries out there that address
these things (so we wouldn't have to).  So far I'm only aware of comet
and dojo (dojotoolkit.com) (the client side).

Probably the best thing is to focus on just updating HTML in the
browser. So if the browser opens that persistent, listening channel,
then Hypervos can listen to the Vobjects on its behalf; if it e.g. 
sees a child-inserted and the new object is HTML/XML, and it also has 
an open connection to a listening web browser, it can serialize the new 
objects into an HTML fragment and push that to the browser, and 
similarly for anything changing in the objects or something being removed.
The browser-side script would then stick the new or changed HTML into the
DOM (set innerHTML or something).
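As a rough sketch of that flow, server-side: the function and event names below are hypothetical (not the real HyperVos API), and the push channel to the browser is assumed to already exist.

```python
# Hypothetical sketch: turning a "child-inserted" notification into an
# HTML fragment to push to a listening browser.  The Vobject and event
# names are illustrative, not the actual HyperVos API.

import html

def vobject_to_fragment(vobject_id, properties):
    """Serialize a newly inserted Vobject into an HTML fragment.
    The browser-side script would splice this in via innerHTML."""
    body = html.escape(properties.get("text", ""))
    return f'<div class="vobject" id="{html.escape(vobject_id)}">{body}</div>'

def on_child_inserted(push, vobject_id, properties):
    """Event handler: serialize the new object and hand it to the
    (assumed) persistent push channel."""
    push(vobject_to_fragment(vobject_id, properties))

# Example: collect pushed fragments in a list instead of a real channel.
sent = []
on_child_inserted(sent.append, "note1", {"text": "Hello & welcome"})
```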

Anyway, I'm just trying to gather ideas for future web applications; I
probably can't work on this right away -- but if you want to, that would
be great; let me know and we can coordinate.  I've been gathering these
ideas at http://interreality.org/wiki/HyperVosIdeas . Part of my
motivation has been searching for a good web application focused on
Question and Answer customer support, as well as general forum
discussion.

Reed




Re: [vos-d] Integration of VOS and IRC

2007-04-18 Thread Reed Hedges


This would be great for people who primarily just want to chat or be
present in the world while doing other work, and so don't want the full
3D world.

It would also make it possible for blind people to interact in the 3D
world.

Reed



On Tue, Apr 17, 2007 at 01:54:23PM -0400, Peter Amstutz wrote:
 Also something I've wanted to explore is the possibility of creating a 
 semantic representation of a space that is meaningful enough to be 
 navigated from both a MUD-style text interface, while still being able 
 to enter it as a fully 3D immersive world.  You could lay down a node 
 network of positions where the text-user could go, add descriptions to 
 all the objects in the space, and then at each node it would print out 
 the descriptions for things that were nearby.  For example (thinking of 
 the current demo black sun world):
 
 ---
 You are standing on a hill of brown dirt.  To the north is a large 
 white pyramid with an entranceway.  To the east the hill continues.  To 
 the west the hill continues.  To the south is black nothingness.
 
 $ Go north
 
 You are at the entrance to the Pyramid.  To the north is a hallway with 
 a tiled floor.  At the end of the hallway are ramps leading up and to 
 the east and west.  To the west is a doorway.
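The node network Peter describes could be sketched roughly like this; the class and attribute names are illustrative only, not actual VOS object types.

```python
# A minimal sketch of the node-network idea: waypoints carry a textual
# description and named exits, and a text user moves between them.
# Illustrative names only, not actual VOS types.

class Waypoint:
    def __init__(self, description):
        self.description = description
        self.exits = {}  # direction name -> Waypoint

hill = Waypoint("You are standing on a hill of brown dirt.")
entrance = Waypoint("You are at the entrance to the Pyramid.")
hill.exits["north"] = entrance
entrance.exits["south"] = hill

def go(current, direction):
    """Move the text user along a named exit, or explain why not."""
    nxt = current.exits.get(direction)
    if nxt is None:
        return current, "You can't go that way."
    return nxt, nxt.description

where, text = go(hill, "north")
```

In a real VOS world the descriptions for nearby objects would be collected at each node, as described above, rather than stored as a single string.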



Re: [vos-d] Integration of VOS and IRC

2007-04-18 Thread Reed Hedges



 scenery part). Unfortunately, viewpoints usually have no navigation  
 links between them. So for what you want to do, you need a  
 combination of both.
 
 This requires some work, but VOS is flexible enough to support all this.


Yeah, you would just have the waypoint object type have child links to
other waypoints. You would then combine it with the viewpoint type,
which provides the 3D position, orientation, and other spatial info. Or
those two aspects could just be combined in one object type.

Having a set of exits from a viewpoint might be useful in 3D as well,
as a way to help people navigate 3D spaces.



 You need exit labels on the navigation edges for this. 

This is what the contextual names on child links are for :)


  Gonzo(3d) goes south to the entrance to the Pyramid.
 
 In contrast, this is terribly complicated. Deriving the intention/activity 
 of a user (that's what you have here) from its raw movements  


We can explicitly represent it by giving the waypoint object type a
child that's a container for avatars who are at that waypoint.
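A minimal sketch of that explicit-presence idea, with hypothetical names (not actual VOS types): each waypoint owns a container of avatars, so movement can be narrated to text users by reparenting.

```python
# Sketch: each waypoint owns a container of the avatars present at it,
# so presence is explicit and moves can be narrated.  Names illustrative.

waypoints = {
    "hill": {"avatars": set(), "label": "the hill"},
    "entrance": {"avatars": set(), "label": "the entrance to the Pyramid"},
}

def move_avatar(name, src, dst, direction):
    """Reparent an avatar between waypoint containers and return the
    text-user narration of the move."""
    waypoints[src]["avatars"].discard(name)
    waypoints[dst]["avatars"].add(name)
    return f'{name} goes {direction} to {waypoints[dst]["label"]}.'

msg = move_avatar("Gonzo(3d)", "hill", "entrance", "south")
```

In VOS terms the container would presumably be a child of the waypoint Vobject, and reparenting would generate exactly the child-removed/child-inserted notifications a text bridge could listen for.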


 A few other text-user commands that may be handy:
 
 follow user - move the text-user's avatar to wherever another  
 avatar (text or 3d!) is moving to.
 
 face user - turn the text-user's avatar to face another one. You  
 can also do this automatically if you detect a corresponding speech  
 pattern like kao: are you there?
 
 approach user - like face, but also move the avatar close to the  
 target.


These behaviors would be useful in 3D too.


Reed




Re: [vos-d] Integration of VOS and IRC

2007-04-18 Thread Marcos Marado
On Wednesday 18 April 2007 16:43, Reed Hedges wrote:
 This would be great for people who primarily want to just chat or be
 present in the world while doing other work, so they don't want the full
 3D world.

 It would also make it possible for blind people to interact in the 3D
 world.

Three great motivations I see for this are:

* the possibility of giving blind people a virtual world - I know many 
blind people using talkers (non-monster-killing variants of MUDs) with the 
help of software like JAWS, because for them it is the easiest way of 
communicating. The problem here is that most people nowadays want more 
attractive clients;

* Ubiquity part I: connect everywhere - How many work colleagues do you know 
who have their Second Life client open while working? And how many of them 
have their IM client open? I'm always present in one virtual world... 
text-based. I wouldn't be online there during work hours if it were a 3D 
environment;

* Ubiquity part II: connect everything - On the subway, on the bus, on 
weekends and vacations, where I don't have a persistent internet connection, 
I'm used to connecting via GPRS, either with a computer or a mobile 
device (a cellphone in my particular case). I wouldn't connect to a 3D world 
via cellphone (hardware limitations) or even over a GPRS connection (too 
slow). But with my SSH client I can connect to text-based virtual worlds.

See, text worlds have advantages and disadvantages compared to 3D worlds. 
What we don't have yet (at least that I know of) is a virtual world you can 
connect to both via a text-based interface and via a 3D application.

-- 
Marcos Marado
Sonaecom IT



Re: [vos-d] Thinking about Javascript

2007-04-18 Thread Peter Amstutz
Well, one approach would be to treat HTTP as an unreliable/stateless 
channel, implement stateful, connection-oriented sessions on top of 
that, and then have VOS (both on the server side and in the browser) run 
on that layer.  Then it would be easy to push updates to the browser by 
queuing messages until the next time the browser checks in.
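A minimal sketch of such a session layer, under the assumption of a simple poll-and-drain model (class and method names are illustrative, not the actual VOS design):

```python
# Sketch: treat each HTTP request as stateless, keep a per-session
# outgoing queue on the server, and drain it whenever the browser next
# checks in.  Illustrative names, not the actual VOS session layer.

from collections import defaultdict, deque

class SessionLayer:
    def __init__(self):
        self.queues = defaultdict(deque)  # session id -> pending messages

    def push(self, session_id, message):
        """Queue an update for a browser that may not be connected yet."""
        self.queues[session_id].append(message)

    def check_in(self, session_id):
        """Called on each HTTP poll: hand over everything queued so far."""
        q = self.queues[session_id]
        pending = list(q)
        q.clear()
        return pending

layer = SessionLayer()
layer.push("s1", "child-inserted: note1")
layer.push("s1", "property-changed: note1")
```

With long polling the check-in request would simply block until the queue is non-empty, rather than returning an empty list.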

On Wed, Apr 18, 2007 at 11:39:06AM -0400, Reed Hedges wrote:

 Yeah, this is why I was looking for libraries out there that address
 these things (so we wouldn't have to).  So far I'm only aware of comet
 and dojo (dojotoolkit.com) (the client side).
 
 Probably the best thing is to focus on just updating HTML in the
 browser. So if the browser opens that persistent, listening channel,
 then Hypervos can listen to the Vobjects on its behalf; if it e.g. 
 sees a child-inserted and the new object is HTML/XML, and it also has 
 an open connection to a listening web browser, it can serialize the new 
 objects into an HTML fragment and push that to the browser, and 
 similarly for anything changing in the objects or something being removed.
 The browser-side script would then stick the new or changed HTML into the
 DOM (set innerHTML or something).
 
 Anyway, I'm just trying to gather ideas for future web applications, I
 probably can't work on this right away-- but if you want to that would
 be great, let me know and we can coordinate.  I've been gathering these
 ideas at http://interreality.org/wiki/HyperVosIdeas . Part of my
 motivation has been searching for a good web application focused on
 Question and Answer customer support, as well as general forum
 discussion.

-- 
[   Peter Amstutz  ][ [EMAIL PROTECTED] ][ [EMAIL PROTECTED] ]
[Lead Programmer][Interreality Project][Virtual Reality for the Internet]
[ VOS: Next Generation Internet Communication][ http://interreality.org ]
[ http://interreality.org/~tetron ][ pgpkey:  pgpkeys.mit.edu  18C21DF7 ]





Re: [vos-d] Integration of VOS and IRC

2007-04-18 Thread Ken Taylor
Yes! Accessibility in 3D virtual worlds would be *huge*. As far as I know,
no one has done anything like this yet... (and if they have, I would really
like to check it out).

-Ken




 This would be great for people who primarily want to just chat or be
 present in the world while doing other work, so they don't want the full
 3D world.

 It would also make it possible for blind people to interact in the 3D
 world.

 Reed



 On Tue, Apr 17, 2007 at 01:54:23PM -0400, Peter Amstutz wrote:
  Also something I've wanted to explore is the possibility of creating a
  semantic representation of a space that is meaningful enough to be
  navigated from both a MUD-style text interface, while still being able
  to enter it as a fully 3D immersive world.  You could lay down a node
  network of positions where the text-user could go, add descriptions to
  all the objects in the space, and then at each node it would print out
  the descriptions for things that were nearby.  For example (thinking of
  the current demo black sun world):
 
  ---
  You are standing on a hill of brown dirt.  To the north is a large
  white pyramid with an entranceway.  To the east the hill continues.  To
  the west the hill continues.  To the south is black nothingness.
 
  $ Go north
 
  You are at the entrance to the Pyramid.  To the north is a hallway with
  a tiled floor.  At the end of the hallway are ramps leading up and to
  the east and west.  To the west is a doorway.



