Re: [vos-d] Online Space

2007-02-12 Thread chris
On 2/13/07, Reed Hedges [EMAIL PROTECTED] wrote:
 chris wrote:
 ...there's a global coordinate system, and a local rendering coordinate
 system...


 So the main thing that you need to do, I guess, is represent your global
 coordinate system not with IEEE floating point numbers (doubles have the
 same problem, just further out), but with a fixed point representation
 (or string even), and be careful in converting them from that
 representation into IEEE floats, but only in the local viewpoint-centered
 rendering coordinate system.

 Right?

Yes, as far as coordinate system considerations go, that's pretty much
it, though it doesn't matter whether you use doubles, quads or fixed
point - so long as you maintain sufficient precision, and therefore
accuracy, in the object system - whatever works best. I do suspect,
however, that a floating point representation will give better
scalability in the general case.
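Reed's point can be made concrete with a small sketch (illustrative Python, not from the original mail; `struct` is used to emulate the IEEE single precision a graphics pipeline works in, and the 100 km figures are arbitrary):

```python
import struct

def f32(x):
    """Round a Python float (a double) to the nearest IEEE single."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# A global position 100 km from the origin, plus a 0.5 mm feature.
a = 100000.0
b = 100000.0005

# Cast the global coordinates straight to single precision: the gap
# between adjacent float32 values near 100000.0 is ~7.8 mm, so the
# half-millimetre difference is rounded away entirely.
print(f32(b) - f32(a))   # the two points collapse to the same float

# Convert to viewpoint-relative coordinates *before* the cast
# (camera at a): the small offset survives with full precision.
print(f32(b - a))        # ~0.0005
```

The precision of a float is relative to its magnitude, which is exactly why the conversion order matters.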

chris

 Reed


 ___
 vos-d mailing list
 vos-d@interreality.org
 http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d




Re: [vos-d] Online Space

2007-02-11 Thread chris
On 2/11/07, Ken Taylor [EMAIL PROTECTED] wrote:

  if your physics - say bouncing boxes like my example - is performed in
  its own local coordinate space then it could be made consistent every
  time - but I can't see how you would combine the rendering of this in
  realtime with the rendering of the scene

 I don't see why transforming from physics simulation space to world space
 for the purpose of rendering the frame is any more difficult than
 transforming from, say, a model's local coordinate system to world space.

if it's not part of the scene when you simulate the physics, then when
you add the objects of the physics sim into the scene (assuming you
have a way to do this over a series of frames) you could get all
sorts of unrealistic things: objects passing through others, objects
not occluding when they should, no shadows, etc.

chris

 Ken






Re: [vos-d] Online Space

2007-02-10 Thread Péter Tölgyesi

Hello, just some thoughts from a lurker. :-)

What if high-level objects and the camera have an extra coarse origin that
is snapped to a grid whose spacing is a large power of 2, so the
values are exactly representable in floating point variables at the same
scale? Fine origins are then relative to these.
OK, not infinite, but this extends the space somewhat, while keeping nearby
objects accurate.
When an object moves, the coarse origin may sometimes need to snap to
another grid point, and this means new coordinates for the fine origin.
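Péter's coarse/fine split can be sketched like so (illustrative Python; the 4096 m grid spacing and the `split` helper are assumptions, not from the mail):

```python
import math

GRID = 2.0 ** 12   # 4096 m: a power of two, so every grid multiple
                   # (up to very large counts) is exact in float32

def split(pos):
    """Split a global coordinate into a snapped coarse origin plus a
    fine offset; the coarse part is an exact grid multiple and the
    fine part stays small (within +/- GRID/2), hence precise."""
    coarse = math.floor(pos / GRID + 0.5) * GRID
    fine = pos - coarse
    return coarse, fine

coarse, fine = split(123456.789)
# coarse == 122880.0 (an exact multiple of 4096), fine == 576.789
assert coarse + fine == 123456.789
```

When the object drifts more than half a grid cell from its coarse origin, re-running `split` re-snaps the coarse origin and rebases the fine one, which is the occasional update Péter describes.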



 In practice, precise consistency is the *least* of people's worries when
 setting up a rigid body physics simulation; setting up the system takes
 a lot of tweaking just to prevent it from flipping out.

 Here's a harder question: how do you handle the physics for an object
 that's passing through or straddling a portal?





The portal link may need more than the point where the object will appear in
the other space.
The door needs an entry direction vector in space A and an exit direction
vector in space B, and these are made to coincide from the object's viewpoint.
The velocity, angular velocity, etc. can then be projected through the same
transform.
There may be other problems I cannot see at first glance, though.
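One way that projection could look (an illustrative 2-D Python sketch; the `make_warp` helper and its parameters are hypothetical, not from the mail): the warp is just the rigid transform that maps the entry frame in space A onto the exit frame in space B, applied to position and velocity alike.

```python
import math

def make_warp(entry_pos, entry_angle, exit_pos, exit_angle):
    """Build the A->B portal map: rotate by the angle between the
    entry and exit directions, then re-base onto the exit origin."""
    d = exit_angle - entry_angle
    c, s = math.cos(d), math.sin(d)

    def rot(v):
        return (c * v[0] - s * v[1], s * v[0] + c * v[1])

    def warp(pos, vel):
        rel = (pos[0] - entry_pos[0], pos[1] - entry_pos[1])
        rx, ry = rot(rel)
        return (exit_pos[0] + rx, exit_pos[1] + ry), rot(vel)

    return warp

# Door in A at the origin facing +x; door in B at (10, 10) facing +y.
warp = make_warp((0.0, 0.0), 0.0, (10.0, 10.0), math.pi / 2)
pos, vel = warp((0.0, 0.0), (3.0, 0.0))
# pos lands on the exit door; vel is rotated to point along +y
```

Angular velocity in 2-D is a scalar and passes through unchanged; in 3-D it would be rotated by the same rotation as the linear velocity.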

Peter


Re: [vos-d] Online Space

2007-02-10 Thread chris
On 2/10/07, Peter Amstutz [EMAIL PROTECTED] wrote:
 On Wed, Feb 07, 2007 at 08:57:18AM +0900, chris wrote:

  That is not to say this model cannot work as a hybrid system with
  portals at doorways, for space jumps etc. In fact, for very large
  scale solar/galaxy systems you would have to either use very high
  precision in the object system or maybe double precision with portals.
 
  but to get optimal accuracy, scalability etc. throughout wherever the
  avatar travels, the graphics engine should be looking at a
  continuous floating origin.

 Don't space-warping portals achieve this effect?  When you walk through
 the portal (both the rendering walk as well as the actual avatar
 moving through), the space rendering is now centered on a new coordinate
 system.  Provided your sectors are relatively small, this seems to be
 more or less equivalent to the periodic recentering described in the

Sure, it'll fix many problems - just like other segmentation
approaches. It won't solve all of them, though, so it depends on what
you ultimately want.

 Dungeon Siege paper you posted.  One of the points of the Dungeon Siege
 paper was also that recentering was a relatively expensive operation, so
 you didn't want to do it every frame, but only when the camera crossed
 certain boundaries, so it's not truly continuous in the sense of doing
 it before every frame.  Besides, that's complete overkill, since the
 point here is precision problems crop up at distances of 30-40km from
 center (assuming 1 notron = 1m) so it takes a very very large world
 before this becomes a problem (or you're doing a geospatial
 simulation...)


The point of referring to DS was that their segmentation approach was
expensive. All segmentation approaches need some mechanism to deal with
the boundaries between segments. If you can create artificial portals
and handle them efficiently, then that's OK. But when they occur in
free space, overheads and other problems can arise. For example, what
happens if you have an NPC on one hill, an avatar on the other, and a
segment boundary in between? If they are firing at each other, and
possibly moving back and forth across the invisible boundary, what do
you do?

 Also, for Interreality, the issue is primarily one of representation,
 since we use an off the shelf 3D engine (Crystal Space).  So my concern
 is how you're going to actually represent those huge worlds (since you
 do have precision problems beyond 30-40km) as a downloaded map, once you
 have that data loaded in, rendering is a separate issue.

I can show that visible artefacts can occur even at one kilometre:
e.g. when there are overlapping surfaces with small separation - a
pretty common thing in a simulated natural environment. The physics
can likewise be shown to be unpredictable at 10 m, or even 0 m, if
time is not managed.
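The scale of the problem is easy to check (illustrative Python; `f32_gap` is a made-up helper that measures the spacing between adjacent single-precision values):

```python
import struct

def f32_gap(x):
    """Distance from the float32 value nearest x to the next larger
    float32 (assumes x positive and finite)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    nxt = struct.unpack('<f', struct.pack('<I', bits + 1))[0]
    return nxt - struct.unpack('<f', struct.pack('<f', x))[0]

print(f32_gap(1000.0))    # 2**-14, ~0.06 mm at 1 km
print(f32_gap(40000.0))   # 2**-8,  ~3.9 mm at 40 km
```

Two surfaces separated by less than that gap cannot be told apart in single precision, which is one ingredient (alongside depth-buffer resolution) in the artefacts described above.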

 (I haven't had a chance to read those other links you posted, so perhaps
 those explain the idea in more detail).

No problem - those papers don't go into depth on how you might
implement this inside the graphics engine.

I think it is OK to choose a portal-based segmentation system as long
as you can work out a way to move to a floating origin in the future.
As long as you have an efficient mechanism for iterating over the
objects, and can modify the navigation system and viewpoint system, then
you should be able to do it without difficulty.

And LOD: the ability to tap into the LOD mechanism for objects and
modify it will be valuable in the future. If you can avoid the kinds of
problems DS had, then you should be OK.

When I finish my thesis (soon!), I'll be looking for an open source 3D
system I can modify and experiment with, so I'll have more to say
then. At the moment, my experiments have been at the two ends of the
spectrum: at the low level with C/OpenGL, and at the other end working
on scenegraph and X3D browsers from the outside. I'll be looking for a
project and open source community that is happy to support an effort
to create a floating origin version of their system.

chris

 --
 [   Peter Amstutz  ][ [EMAIL PROTECTED] ][ [EMAIL PROTECTED] ]
 [Lead Programmer][Interreality Project][Virtual Reality for the Internet]
 [ VOS: Next Generation Internet Communication][ http://interreality.org ]
 [ http://interreality.org/~tetron ][ pgpkey:  pgpkeys.mit.edu  18C21DF7 ]








Re: [vos-d] Online Space

2007-02-10 Thread chris
On 2/10/07, Karsten Otto [EMAIL PROTECTED] wrote:
 I am not quite sure what kind of precision is necessary here... I'd
 expect that it should be enough to center the current sector for
 displaying purposes, and re-center to the new sector once you cross a
 portal boundary. Considering relatively small sectors, I'd imagine an
 error factor of a few centimeters is not too disturbing when the
 objects in question are 50 meters away. This is probably two pixels
 difference on the average display resolution. I can live with that :-)

What counts as enough precision is the main problem with all
segmentation approaches - enough is worked out from experience,
testing, or guessing. But in terms of general simulation there is no
one size that will always work, when 10 m or less can make a
noticeable difference.


 Regarding physics simulation, which (if I understand you correctly)
 suffers the most from matrix creep, well... I am no expert, but
 couldn't you calculate this in a virtual coordinate space, derived
 from the world coordinates in such a fashion that all objects
 involved are close to the center? And then, once you reach some
 stable result, convert the virtual coordinates back to world
 coordinate space and continue from there? That may not be
 particularly precise or realistic, but again, as long as the system
 behaves more or less consistently, I can live with it.

if your physics - say, bouncing boxes like my example - is performed in
its own local coordinate space, then it could be made consistent every
time - but I can't see how you would combine the rendering of this in
realtime with the rendering of the scene, unless you composite it
artificially, painter's-algorithm style. In that case it would work, but
all sorts of rendering effects would not be consistent with the rest of
the scene: shadows, lighting, occlusion, etc. And getting the
compositing to look good might be difficult and might slow the
performance of your rendering system. E.g. if you used a BSP-tree
system like Fly3D, how do you composite a physics sequence over 200
frames when it crosses several partition planes?

chris


 Regards,
 Karsten Otto

 On 09.02.2007 at 16:29, Peter Amstutz wrote:

  On Wed, Feb 07, 2007 at 08:57:18AM +0900, chris wrote:
 
  That is not to say this model cannot work as a hybrid system with
  portals at doorways, for space jumps etc. In fact, for very large
  scale solar/galaxy systems you would have to either use very high
  precision in the object system or maybe double precision with
  portals.
 
  but to get optimal accuracy, scalability etc. throughout wherever the
  avatar travels, the graphics engine should be looking at a
  continuous floating origin.
 
  Don't space-warping portals achieve this effect?  When you walk through
  the portal (both the rendering walk as well as the actual avatar
  moving through), the space rendering is now centered on a new
  coordinate
  system.  Provided your sectors are relatively small, this seems to be
  more or less equivalent to the periodic recentering described in the
  Dungeon Siege paper you posted.  One of the points of the Dungeon
  Siege
  paper was also that recentering was a relatively expensive
  operation, so
  you didn't want to do it every frame, but only when the camera crossed
  certain boundaries, so it's not truly continuous in the sense of
  doing
  it before every frame.  Besides, that's complete overkill, since the
  point here is precision problems crop up at distances of 30-40km from
  center (assuming 1 notron = 1m) so it takes a very very large world
  before this becomes a problem (or you're doing a geospatial
  simulation...)
 
  Also, for Interreality, the issue is primarily one of representation,
  since we use an off the shelf 3D engine (Crystal Space).  So my
  concern
  is how you're going to actually represent those huge worlds (since you
  do have precision problems beyond 30-40km) as a downloaded map,
  once you
  have that data loaded in, rendering is a separate issue.
 
  (I haven't had a chance to read those other links you posted, so
  perhaps
  those explain the idea in more detail).
 
 






Re: [vos-d] Online Space

2007-02-10 Thread chris
On 2/10/07, Peter Amstutz [EMAIL PROTECTED] wrote:
 On Fri, Feb 09, 2007 at 04:56:39PM +0100, Karsten Otto wrote:
  I am not quite sure what kind of precision is necessary here... I'd
  expect that it should be enough to center the current sector for
  displaying purposes, and re-center to the new sector once you cross a
  portal boundary. Considering relatively small sectors, I'd imagine an
  error factor of a few centimeters is not too disturbing when the
  objects in question are 50 meters away. This is probably two pixels
  difference on the average display resolution. I can live with that :-)

 You've put your finger on it.  From a rendering perspective, you just
 need enough precision so that the edges between two triangles that are
 adjacent will actually be drawn that way without visible seams or
 cracks.  If you make sure the world is set up in such a way that the
 camera is never too far from the origin, then this isn't a problem.

that's basically right, though a recent experiment I did showed visible
rendering issues starting at 1 km. It all depends on how much time and
effort and performance you want to devote to an ad-hoc solution:
there will always be ways to defeat it. But everyone else does it this
way - moves the viewpoint through the environment - so I'm really only
a single voice against all the conventional approaches :(

 (Strictly speaking, the camera transform causes the world to be oriented
 around the camera, not to orient the camera in the world -- but it's
 easier to speak about it as if it were the camera that is moving).

 Human scale is pretty easy to manage.  Geospatial scale, and
 particularly moving between a galactic scale down to a human scale
 smoothly is where things are really difficult.  I'm open to ideas as to
 how to split this, because frankly I don't know how best to compromise
 between a massive scale and a high-detail scale, particularly in terms
 of the representation that gets pushed over the network.

I would look at how they do it in EVE Online, or one of the other star
system based games, if you can find out, and make your object system do
something similar. The display system will always show a subset, and
that is where the optimisations occur. Another reference to look at: I
thought O'Neil's on-the-fly system was very good - for a conventional
ad-hoc approach.


 Also, on the topic of precision, something else that hasn't been
 mentioned is cumulative error -- depending on how you work with your
 coordinates, you may start getting artifacts due to the accumulation of
 roundoff errors.  This is fairly manageable by having some immutable
 source data from which you do your transforms, instead of doing
 incremental transformations, but still something to watch out for.

Hee hee! That's really what I aim to minimise with the floating origin!
I don't believe it is so manageable in a conventional origin-relative
system. The spatial error that I talk about generally increases linearly
with distance from the origin, so that is not such a big problem if you
move the origin every now and then - like every 3 km in MS Flight
Simulator. But there are a few situations where it can increase in
powers of 2 - rare, so you can discount them if you want (I don't, but
that's me).

These spatial errors are one of the many contributors to relative
error. The relative error propagates exponentially - and that is the
main problem. So my contention is: if you minimise the error input
into the relative-error equation, you minimise the exponential error
from relative-error propagation - which is what affects things most in
the end, when they reach the last stage of the graphics pipeline. The
more complex your simulation system becomes, the more the relative
error impinges on the quality of the sim. So the foundation you choose
to build upon is important.
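A minimal sketch of that contention (illustrative Python; the distances and variable names are invented, and `struct` emulates the single-precision pipeline): keep world coordinates in high precision and subtract the viewpoint *before* the cast to float32, so almost no error is fed into the pipeline.

```python
import struct

def f32(x):
    """Round a Python float (a double) to the nearest IEEE single."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

camera = 5_000_000.0    # viewpoint 5000 km from the world origin
vertex = 5_000_000.25   # a vertex 25 cm in front of the camera

# Conventional: cast world coordinates to float32 first, subtract later.
# Near 5e6 the float32 grid spacing is 0.5 m, so the 25 cm offset is
# rounded away before the pipeline ever sees it.
err_conventional = abs((f32(vertex) - f32(camera)) - 0.25)

# Floating origin: subtract in double precision, then cast the small
# camera-relative value; here the error input is zero.
err_floating = abs(f32(vertex - camera) - 0.25)
```

The conventional route loses the whole 25 cm here, while the floating-origin route preserves it exactly, because small camera-relative numbers sit where float32 is densest.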


  Regarding physics simulation, which (if I understand you correctly)
  suffers the most from matrix creep, well... I am no expert, but
  couldn't you calculate this in a virtual coordinate space, derived
  from the world coordinates in such a fashion that all objects
  involved are close to the center? And then, once you reach some
  stable result, convert the virtual coordinates back to world
  coordinate space and continue from there? That may not be
  particularly precise or realistic, but again, as long as the system
  behaves more or less consistently, I can live with it.

 Hmm.  Yes, you probably could do something like that, if you can divide
 up your interacting objects into isolated groups.  Then you could
 simulate those objects independently at the origin, then transform them
 back to the original position.  Seems like a hassle, though.

 In practice, precise consistency is the *least* of people's worries when
 setting up a rigid body physics simulation; setting up the system takes
 a lot of tweaking just to prevent it from flipping out.

 Here's a harder question: how do you handle the physics for an object
 that's passing through or straddling a portal?

Re: [vos-d] Online Space

2007-02-10 Thread Ken Taylor

 if your physics - say bouncing boxes like my example - is performed in
 its own local coordinate space then it could be made consistent every
 time - but I can't see how you would combine the rendering of this in
 realtime with the rendering of the scene

I don't see why transforming from physics simulation space to world space
for the purpose of rendering the frame is any more difficult than
transforming from, say, a model's local coordinate system to world space.

Ken




Re: [vos-d] Online Space

2007-02-09 Thread Peter Amstutz
On Wed, Feb 07, 2007 at 08:57:18AM +0900, chris wrote:

 That is not to say this model cannot work as a hybrid system with
 portals at doorways, for space jumps etc. In fact, for very large
 scale solar/galaxy systems you would have to either use very high
 precision in the object system or maybe double precision with portals.
 
 but to get optimal accuracy, scalability etc. throughout wherever the
 avatar travels, the graphics engine should be looking at a
 continuous floating origin.

Don't space-warping portals achieve this effect?  When you walk through
the portal (both the rendering walk as well as the actual avatar 
moving through), the space rendering is now centered on a new coordinate 
system.  Provided your sectors are relatively small, this seems to be 
more or less equivalent to the periodic recentering described in the 
Dungeon Siege paper you posted.  One of the points of the Dungeon Siege 
paper was also that recentering was a relatively expensive operation, so 
you didn't want to do it every frame, but only when the camera crossed 
certain boundaries, so it's not truly continuous in the sense of doing
it before every frame.  Besides, that's complete overkill, since the 
point here is precision problems crop up at distances of 30-40km from 
center (assuming 1 notron = 1m) so it takes a very very large world 
before this becomes a problem (or you're doing a geospatial 
simulation...)

Also, for Interreality, the issue is primarily one of representation, 
since we use an off the shelf 3D engine (Crystal Space).  So my concern 
is how you're going to actually represent those huge worlds (since you 
do have precision problems beyond 30-40km) as a downloaded map, once you 
have that data loaded in, rendering is a separate issue.

(I haven't had a chance to read those other links you posted, so perhaps 
those explain the idea in more detail).






Re: [vos-d] Online Space

2007-02-09 Thread Karsten Otto
I am not quite sure what kind of precision is necessary here... I'd  
expect that it should be enough to center the current sector for  
displaying purposes, and re-center to the new sector once you cross a  
portal boundary. Considering relatively small sectors, I'd imagine an  
error factor of a few centimeters is not too disturbing when the
objects in question are 50 meters away. This is probably two pixels  
difference on the average display resolution. I can live with that :-)

Regarding physics simulation, which (if I understand you correctly)  
suffers the most from matrix creep, well... I am no expert, but  
couldn't you calculate this in a virtual coordinate space, derived  
from the world coordinates in such a fashion that all objects  
involved are close to the center? And then, once you reach some  
stable result, convert the virtual coordinates back to world  
coordinate space and continue from there? That may not be  
particularly precise or realistic, but again, as long as the system  
behaves more or less consistently, I can live with it.

Regards,
Karsten Otto

On 09.02.2007 at 16:29, Peter Amstutz wrote:

 On Wed, Feb 07, 2007 at 08:57:18AM +0900, chris wrote:

 That is not to say this model cannot work as a hybrid system with
 portals at doorways, for space jumps etc. In fact, for very large
 scale solar/galaxy systems you would have to either use very high
 precision in the object system or maybe double precision with  
 portals.

  but to get optimal accuracy, scalability etc. throughout wherever the
  avatar travels, the graphics engine should be looking at a
  continuous floating origin.

 Don't space-warping portals achieve this effect?  When you walk through
 the portal (both the rendering walk as well as the actual avatar
 moving through), the space rendering is now centered on a new  
 coordinate
 system.  Provided your sectors are relatively small, this seems to be
 more or less equivalent to the periodic recentering described in the
 Dungeon Siege paper you posted.  One of the points of the Dungeon  
 Siege
 paper was also that recentering was a relatively expensive  
 operation, so
 you didn't want to do it every frame, but only when the camera crossed
 certain boundaries, so it's not truly continuous in the sense of
 doing
 it before every frame.  Besides, that's complete overkill, since the
 point here is precision problems crop up at distances of 30-40km from
 center (assuming 1 notron = 1m) so it takes a very very large world
 before this becomes a problem (or you're doing a geospatial
 simulation...)

 Also, for Interreality, the issue is primarily one of representation,
 since we use an off the shelf 3D engine (Crystal Space).  So my  
 concern
 is how you're going to actually represent those huge worlds (since you
 do have precision problems beyond 30-40km) as a downloaded map,  
 once you
 have that data loaded in, rendering is a separate issue.

 (I haven't had a chance to read those other links you posted, so  
 perhaps
 those explain the idea in more detail).






Re: [vos-d] Online Space

2007-02-09 Thread Peter Amstutz
On Fri, Feb 09, 2007 at 04:56:39PM +0100, Karsten Otto wrote:
 I am not quite sure what kind of precision is necessary here... I'd  
 expect that it should be enough to center the current sector for  
 displaying purposes, and re-center to the new sector once you cross a  
 portal boundary. Considering relatively small sectors, I'd imagine an  
 error factor of a few centimeters is not too disturbing when the
 objects in question are 50 meters away. This is probably two pixels  
 difference on the average display resolution. I can live with that :-)

You've put your finger on it.  From a rendering perspective, you just 
need enough precision so that the edges between two triangles that are 
adjacent will actually be drawn that way without visible seams or 
cracks.  If you make sure the world is set up in such a way that the 
camera is never too far from the origin, then this isn't a problem. 

(Strictly speaking, the camera transform causes the world to be oriented 
around the camera, not to orient the camera in the world -- but it's 
easier to speak about it as if it were the camera that is moving).

Human scale is pretty easy to manage.  Geospatial scale, and 
particularly moving between a galactic scale down to a human scale 
smoothly is where things are really difficult.  I'm open to ideas as to 
how to split this, because frankly I don't know how best to compromise 
between a massive scale and a high-detail scale, particularly in terms 
of the representation that gets pushed over the network.


Also, on the topic of precision, something else that hasn't been 
mentioned is cumulative error -- depending on how you work with your 
coordinates, you may start getting artifacts due to the accumulation of 
roundoff errors.  This is fairly manageable by having some immutable 
source data from which you do your transforms, instead of doing 
incremental transformations, but it is still something to watch out for.
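A quick illustration of the two approaches Peter contrasts (hypothetical Python; the 10000-frame rotation is an arbitrary example):

```python
import math

def rotate(p, theta):
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

source = (1.0, 0.0)             # immutable source vertex
step = 2.0 * math.pi / 10000    # one full revolution over 10000 frames

# Incremental: rotate the already-rotated point every frame, so each
# frame's roundoff is baked into the next frame's input.
p = source
for _ in range(10000):
    p = rotate(p, step)

# Immutable source: recompute the current transform from the original
# data each frame; roundoff never accumulates.
q = rotate(source, step * 10000)

# After a full revolution both should be back at (1, 0); the
# incremental version has drifted slightly, the recomputed one has not.
```

The drift here is tiny, but it grows with frame count, and the same accumulation happens to concatenated matrices in a scene graph.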

 Regarding physics simulation, which (if I understand you correctly)  
 suffers the most from matrix creep, well... I am no expert, but  
 couldn't you calculate this in a virtual coordinate space, derived  
 from the world coordinates in such a fashion that all objects  
 involved are close to the center? And then, once you reach some  
 stable result, convert the virtual coordinates back to world  
 coordinate space and continue from there? That may not be  
 particularly precise or realistic, but again, as long as the system  
 behaves more or less consistently, I can live with it.

Hmm.  Yes, you probably could do something like that, if you can divide 
up your interacting objects into isolated groups.  Then you could 
simulate those objects independently at the origin, then transform them 
back to the original position.  Seems like a hassle, though.

In practice, precise consistency is the *least* of people's worries when 
setting up a rigid body physics simulation; setting up the system takes 
a lot of tweaking just to prevent it from flipping out.

Here's a harder question: how do you handle the physics for an object 
that's passing through or straddling a portal?






Re: [vos-d] Online Space

2007-02-06 Thread Peter Amstutz
On Fri, Feb 02, 2007 at 12:15:47PM +0900, chris wrote:

 Yes - that's why we use a single continuous world space. Many systems
 like VGIS divide the earth into fixed-size sectors. This sort of
 segmentation creates many overheads.
 The Dungeon Siege game segmented its world into SiegeNodes, each
 with its own local coordinate space. When the viewpoint crossed a
 boundary between nodes, the local coordinate system changed to that
 of the node being entered and a ``Space Walk'' began.
 The space walk visited each active node and recalculated coordinate
 transforms to shift objects closer to the new local origin. This
 ensured coordinates did not get large enough to cause noticeable spatial
 jitter. It takes considerable processing resources to do the space walk, and
 the frequency of performing recalculations has to be limited: ``as
 infrequently as possible to
 avoid bogging down the CPU'' {Bilas}:
 http://www.drizzle.com/~scottb/gdc/continuous-world.htm

Okay, I've had a chance to read over and digest the continuous world 
document.  As I understand it, the world is basically a set of nodes 
which are connected to form an adjacency graph.  The edges describe how 
the nodes are oriented/transformed in space in relation to each 
surrounding node.  The camera works in the coordinate space of whatever 
particular node it's in, and everything else is recentered relative to 
the current node.
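That recentering can be sketched as a graph walk (illustrative Python with translation-only edges for brevity; real SiegeNode edges also carry orientation, and the node names here are made up):

```python
from collections import deque

# Adjacency graph: each edge carries the offset of the neighbour
# node's origin, expressed in the source node's coordinate space.
edges = {
    'A': {'B': (50.0, 0.0)},
    'B': {'A': (-50.0, 0.0), 'C': (0.0, 30.0)},
    'C': {'B': (0.0, -30.0)},
}

def space_walk(current):
    """Breadth-first walk that recomputes every reachable node's
    origin relative to the node the camera currently occupies."""
    offsets = {current: (0.0, 0.0)}
    queue = deque([current])
    while queue:
        node = queue.popleft()
        ox, oy = offsets[node]
        for nbr, (dx, dy) in edges[node].items():
            if nbr not in offsets:
                offsets[nbr] = (ox + dx, oy + dy)
                queue.append(nbr)
    return offsets

# Crossing from node A into node C recenters the whole graph on C:
# relative to A, C sits at (50, 30); relative to C, A sits at (-50, -30).
```

Because every offset is rebuilt from the edge data on each walk, coordinates near the camera stay small, which is exactly what keeps the jitter down.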

I think this fits in very well with using portals in VOS.  A normal 
portal is a polygon in space which causes the renderer to recursively 
start rendering the sector behind the portal, clipped to the portal 
polygon.  This works nicely for indoor areas because if the portal isn't 
visible, it doesn't have to consider the room behind the portal at all.  
It's also used by some engines to connect indoor and outdoor areas (for 
example, I believe indoor areas in World of Warcraft are portals to a 
separate map, so that a viewer who is outside the building doesn't have 
to consider the building interior in rendering.)

The second kind of portal is a space-warping portal.  This works the 
same as a normal portal, except that a space transform (rotation and 
translation) is applied to the target sector.  This means that target 
sector no longer has to be in the same coordinate system as your current 
space.  Your current space has one origin, the space on the other side 
of the portal has another origin, and they're defined relative to each 
other.  Thus, crossing the portal boundary is in effect recentering the 
entire space.

I've always been against a unified coordinate system for virtual worlds 
for philosophical and pragmatic reasons (you're never going to get 
people to agree on how to allocate space except via some central 
authority), so it's good to conclude that this is probably the best 
technical solution as well.






Re: [vos-d] Online Space

2007-02-06 Thread chris
When I speak of a single continuous world space I am referring only to
the subset of the application that is used in the display system: the
part of the app that includes the graphics pipeline. This is where we
are forced to use single precision because of hardware limits and
performance, and this is where it is good to avoid overheads or
artificial boundaries in an apparently continuous space (like in
Morrowind, when walking along a path, it would halt and you'd get a
message like "loading external environment" before you could
continue). At any one time it is only a subset of the application
space.

for more background you may want to look at these papers:
published in Cyberworlds 2006:
http://planet-earth.org/cw06/thorne-CW06.pdf

to be published in the Journal of Ubiquitous Computing and
Intelligence special issue on Cyberworlds:
http://planet-earth.org/ubcw/thorne-UBCW.pdf

published in proceedings of Cyberworlds05:
http://planet-earth.org/cw05/FloatingOrigin.pdf

That is not to say this model cannot work as a hybrid system with
portals at doorways, for space jumps etc. In fact, for very large
scale solar/galaxy systems you would have to either use very high
precision in the object system or maybe double precision with portals.

But to get optimal accuracy and scalability throughout wherever the
avatar travels, the graphics engine should use a continuous floating
origin. The best description of an on-the-fly origin-shifting system I
have seen is O'Neil's "A Real-Time Procedural Universe, Part Three:
Matters of Scale":
http://www.gamasutra.com/features/20020712/oneil_01.htm

Although it is not a true continuous floating origin, it gives a similar effect.
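For concreteness, the core of a floating-origin scheme can be sketched in a few lines (the threshold and names are illustrative, not taken from O'Neil's code): whenever the camera drifts past some distance from (0,0,0), everything is shifted so the camera becomes the origin again, keeping rendering in the region where float resolution is highest.

```python
REBASE_THRESHOLD = 1000.0  # metres before recentering; an illustrative value

def maybe_rebase(camera, objects):
    """If the camera has drifted too far from the origin, translate the
    whole scene by -camera so the camera sits at (0,0,0) again.
    Camera-to-object distances are unchanged, so nothing visibly
    moves -- but coordinates stay small."""
    dist = sum(c * c for c in camera) ** 0.5
    if dist < REBASE_THRESHOLD:
        return camera, objects
    shifted = [tuple(o - c for o, c in zip(obj, camera)) for obj in objects]
    return (0.0, 0.0, 0.0), shifted

cam, objs = maybe_rebase((2000.0, 0.0, 0.0), [(2500.0, 0.0, 0.0)])
print(cam, objs)  # -> (0.0, 0.0, 0.0) [(500.0, 0.0, 0.0)]
```

A true continuous floating origin does this every frame rather than at a threshold, but the arithmetic is the same.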

On 2/7/07, Peter Amstutz [EMAIL PROTECTED] wrote:
 On Fri, Feb 02, 2007 at 12:15:47PM +0900, chris wrote:
 
  Yes - that's why we use a single continuous world space. Many systems
  like VGIS divide the earth into fixed sized sectors. This sort of
  segmentation creates many overheads.
  The Dungeon Siege game segmented its world into SiegeNodes, each
  with its own local coordinate space. When the viewpoint crossed a
  boundary between nodes, the local coordinate system changed to that
  of the node being entered and a ``Space Walk'' began.
  The space walk visited each active node and recalculated coordinate
  transforms to shift objects closer to the new local origin. This
  ensured coordinates did not get large enough to cause noticeable spatial
  jitter. It uses considerable processing resources to do space walk and
  the frequency of performing recalculations has to be limited: ``as
  infrequently as possible to
  avoid bogging down the CPU'' {Bilas}:
  http://www.drizzle.com/~scottb/gdc/continuous-world.htm

 Okay, I've had a chance to read over and digest the continuous world
 document.  As I understand it, the world is basically a set of nodes
 which are connected to form an adjacency graph.  The edges describes how
 the nodes are oriented/transformed in space in relation to each
 surrounding node.  The camera works in the coordinate space of whatever
 particular node it's on, and everything else is recentered relative to
 the current node.

 I think this fits in very well with using portals in VOS.  A normal
 portal is a polygon in space which causes the renderer to recursively
 start rendering the sector behind the portal, clipped to the portal
 polygon.  This works nicely for indoor areas because if the portal isn't
 visible, it doesn't have to consider the room behind the portal at all.
 It's also used by some engines to connect indoor and outdoor areas (for
 example, I believe indoor areas in World of Warcraft are portals to a
 separate map, so that a viewer who is outside the building doesn't have
 to consider the building interior in rendering.)

 The second kind of portal is a space-warping portal.  This works the
 same as a normal portal, except that a space transform (rotation and
 translation) is applied to the target sector.  This means that target
 sector no longer has to be in the same coordinate system as your current
 space.  Your current space has one origin, the space on the other side
 of the portal has another origin, and they're defined relative to each
 other.  Thus, crossing the portal boundary is in effect recentering the
 entire space.

 I've always been against a unified coordinate system for virtual worlds
 for philosophical and pragmatic reasons (you're never going to get
 people to agree on how to allocate space except via some central
 authority), so it's good to consider that this is probably the best
 technical solution as well.

Agreed - as far as the object system goes (which is the main part of
the application), you need appropriate coordinate system(s) - like
lat, lon, height and a reference system for geospatial. And for outer
space some segmentation is likely.

The translation from the object-system coordinate system to the
display-system coordinate system happens with the LOD/visibility/active object

Re: [vos-d] Online Space

2007-02-01 Thread Karsten Otto
First off, I'd say the limit is really the coordinate system you use.
Assuming you have a 4-byte integer value measuring meters, then you
can already go roughly 2,000,000,000 meters in any direction, which
well exceeds terrestrial distances, but isn't quite enough to take you
from the Sun to Pluto (iirc, my numbers may be wrong). That is why
Java3D, for example, has a 256-bit HiResCoord data type, which is
sufficient to describe a universe in excess of several billion light
years across, yet still define objects smaller than a proton. If
that is still not enough, you could use an arbitrary number of bits,
which is theoretically limitless. In practice it will be restricted
by the amount of bits supplied by your main memory, though :-)
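Karsten's arithmetic checks out; a quick sketch (the Sun-Pluto figure of roughly 5.9e12 m is the mean orbital distance):

```python
INT32_MAX = 2**31 - 1       # signed 4-byte integer: 2,147,483,647
SUN_TO_PLUTO_M = 5.9e12     # mean Sun-Pluto distance, roughly

print(INT32_MAX >= 4.0e7)           # True: Earth's circumference fits easily
print(INT32_MAX >= SUN_TO_PLUTO_M)  # False: about three orders of magnitude short
```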

However, people do not like to work with large numbers. It is much  
more convenient to have the coordinates of your world closely around  
(0,0,0). You have that in your pyramid world, and the little penguin  
will want it for its iceberg too, probably on the spot where he keeps  
his bucket of fish. Then you *link* the iceberg into your pyramid  
world, just like a web hyperlink, but with attached coordinates: It  
states that the iceberg is at (x,y,z) in your pyramid coordinate  
system. The penguin does the same for its iceberg world, indicating  
where the pyramid is located in relation to its fish bucket. These  
two numbers should be inverses of each other if you want realism, but  
they don't have to be; it could be much closer from the iceberg to  
the pyramid than vice versa, or not possible at all to go in one  
direction. This kind of free linking is just one possibility however;  
some people may prefer a fixed-size grid for this sort of thing, or  
a more restricted scheme for who may link to what. In any case, there  
should be some halfway point at which the seagulls stop using the  
pyramid server for position tracking and switch over to the  
iceberg server. This kind of handover is more or less what you do to  
make cellphones work.

Hope this helps a bit!
Karsten Otto (kao)




Re: [vos-d] Online Space

2007-02-01 Thread Reed Hedges

Karsten and Chris are both right and have insightful comments.

There's no real computational or memory restriction on the size of a 
volume of space *as a volume of space*. Chris is talking about the 
representation of coordinates.

[[I.e. the only reason that a 1x1x1 kilometer space is different from a 
1x1x1 meter space is that the 3 numbers are bigger. It's not like every 
1x1x1 m cube within the 1x1x1 km space needs N bytes of RAM or anything :)]]

In the past we've talked about the problems of resolution of large 
floating point numbers but never settled on a solution per se, other 
than perhaps someday doing automatic subdivision of the space into 
multiple sectors, whenever a need for a tool like that comes up.  So 
you enter new subordinate or nested coordinate systems as you move 
around.

If you want to be able to see that whole galaxy in the rendering all at 
once, that might be a bit of a challenge, but it should be possible to 
figure out. (My guess is that graphics research has already discovered 
some solutions to this?)

Reed



S Mattison wrote:
 This might seem a haphazard or poorly thought out question, but it has
 been long begged by science fiction, and I'm very intrigued to hear
 answers from people who might know how it would be possible...
 
 Forget everything you know about the COD format.
 
 Say, I have a small online world, which looks something like a pyramid
 on top of a hill. Consider the center of the base of this pyramid as
 The Origin Point. Say the extent of the square-shaped land area in
 my world ranges from the virtual X/Y values of +1 to -1. (I know
 nothing about the values of the current pyramid map, but follow me
 on a tangent here...)
 
 After that, say I allow avatars into my world, maybe they look like
 birds of some sort.
 
 Now... and this is where it gets tricky... Say I give them a command
 that allows them to 'fly', or, retain the same Z value, while they
 navigate across the X and Y axis...
 
 Would it be possible to allow my world to have near-infinite values
 for X and Y (At least, as high as modern floating-point variables go)?
 Say; If two avatars float in an opposite direction for hours on end,
 for the span of eight, sixteen, thirty-two hours... How would the
 world need to be programmed so that, assuming they turn around 180 and
 float back, it would take them both exactly the same amount of time to
 get back to their original meeting place?
 
 If Penguin A created his own land-mass 28 hours from the meeting
 point, how could I store it and retain the data in the server,
 assuming said Penguin is capable of finding this point again?
 
 -Steve
 




Re: [vos-d] Online Space

2007-02-01 Thread chris
On 2/2/07, Reed Hedges [EMAIL PROTECTED] wrote:

 Karsten and Chris are both right and have insightful comments.

thx Reed :)


 There's no real computational or memory restriction on the size of a
 volume of space *as a volume of space*  Chris is talking about the
 representation of coordinates.

 [[I.e. the only reason that a 1x1x1 kilometer space is different from a
 1x1x1 meter space is that the 3 numbers are bigger. It's not like every
 1x1x1 m cube within the 1x1x1 km space needs N bytes of RAM or anything :)]]

 In the past we've talked about the problems of resolution of large
 floating point numbers but never came to any solution for that per se,
 but perhaps to someday do automatic subdivision of the space into
 multiple sectors, whenever  a need for a tool like that comes up.  So
 you enter new subbordinate or nested coordinate systems as you move
 around.

Subdivision of space is the most common approach but it does not give
a true continuous world space to move around in, and it has a lot of
overheads managing the segments. There are also a multitude of
special-case problems that occur at the boundaries: it can become a
mess. Artificially managing this through portals is OK for games but
does not suit all apps - like a virtual earth, for example.

 If you want to be able to see that whole galaxy in the rendering all at
 once that might be a bit of a challenge, but should be possible to
 figure out. (My guess is that graphics research has already discovered
 some solutions to this?)

The best combo of techniques from research, IMHO, is what I call
origin-centric techniques: they build on the concept of a continuous
floating origin (in the client-side display system), include special
management of clip planes and LOD, and use a slightly different
simulation pipeline architecture end-to-end from server to client.
Plus stuff like impostors for distant objects in galaxies.

Note since this is all the subject of my thesis I may be considered a
bit biased in this area :)

cheers,

chris



Re: [vos-d] Online Space

2007-02-01 Thread S Mattison
You mean those penguins are actually three millimeters tall? Omg, MicroPenguins!

 [Notice that we never specify what the units in VOS are. We can call
 them notrons in honor of an original collaborator in the project :)
 As a de-facto convention they would probably be meters in most worlds,
 and TerAngreal's default walking speed is roughly based on that, but
 they don't have to be meters if you don't want to.]



Re: [vos-d] Online Space

2007-02-01 Thread chris
On 2/2/07, Reed Hedges [EMAIL PROTECTED] wrote:
 chris wrote:
  The best combo of techniques from research IMHO is what I call
  origin-centric techniques that build on the concept of a continuous
  floating origin (in the client side display system), includes special
  management of clip planes and LOD and a slightly different simulation
  pipeline architecture end-to-end from server to client.  Plus stuff
  like imposters for distant objects in galaxies.
 
  Note since this is all the subject of my thesis I may be considered a
  bit biased in this area :)

 Oh, that's great! Please share your expertise.

I can refer you to some papers/presentations/videos etc. The rest will
have to come either from discussions like this or from later material
I write. But first I think I'll begin by posing some thought problems
that you can have a go at answering - I'll give the answers afterwards
with images, code or whatever to back them up.


 So what are some of the requirements on the server/networking?
 (Generally speaking, if that's possible.)

i'll get to that later ...


 One thing that I'd like to have at some point is a way to enter
 another object/space; e.g. when flying around the solar system it's
 really a scale model of sorts, until you decide to descend to the
 surface of a planet.   I guess the planet is basically a hyperlink to
 another version of itself. Perhaps the transition could be triggered
 automatically by proximity too. (though that might be confusing or
 irritating to users.)

That's a nice idea but might be difficult. GeoVRML addressed this in a
different way: they continuously scaled both avatar size and speed as
you moved towards the planet surface, based on height above the
surface. That only suits some apps though - what if you want to enter
a space station and your avatar is 10 times bigger than the station!
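The height-based speed scaling chris describes can be sketched like this (the constants and the linear law are illustrative assumptions, not GeoVRML's actual formula):

```python
def nav_speed(height_m, walk_speed=1.5, walk_height=2.0):
    """Scale avatar speed with height above the surface: at eye level you
    move at walking pace, from high altitude you cross continents quickly.
    A simple linear law is used here purely for illustration."""
    return walk_speed * max(height_m, walk_height) / walk_height

print(nav_speed(2.0))       # -> 1.5 (walking pace at ground level)
print(nav_speed(200000.0))  # -> 150000.0 (orbit-scale travel)
```

This is exactly why it only suits some apps: the same law that makes a planet navigable makes an avatar uselessly fast (or, with size scaling, uselessly large) inside a space station.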

In theory, scaling of the space (and objects) does not solve the
problem of navigating such large spatial extents, because it just
scales the problems with it. However, I have experienced some benefits
in some cases that are currently unexplained.

Basically, I find the main things here are managing the origin, the
clip planes (i.e. z-buffer resolution), and LOD. And if you look at
O'Neil's articles you will see some of this, plus stuff on impostors
for planets and stars.

chris

 We humans perceive different levels of scale depending on what the
 objects in question actually are; we can make those levels of scale
 explicit and both integrate navigation of the world in the world itself,
 and avoid the scale/coordinate representation problems (and having to
 manually adjust your movement speed from warp factor 5 to mach to
 walk :)

 [Notice that we never specify what the units in VOS are. We can call
 them notrons in honor of an original collaborator in the project :)
 As a de-facto convention they would probably be meters in most worlds,
 and TerAngreal's default walking speed is roughly based on that, but
 they don't have to be meters if you don't want to.]

 Reed






Re: [vos-d] Online Space

2007-02-01 Thread Peter Amstutz
What are your thoughts about fixed-point numbers?  If you have, say, a 
16.16 fixed-point number and the units are meters, you get a maximum 
range of 65 kilometers with a resolution of about 15 micrometers (Reed 
mentioned "notrons" but in practice meters are the most useful for any 
kind of human-scale modeling).  I'm not a big fan of the "one 
coordinate system to rule them all" school of virtual worlds; I'm more 
interested in smaller spaces hyper-connected together relative to each 
other using portals, scene graph tricks (the contents of space A are 
embedded in space B at some offset) or just saying "the edge of this 
space is adjacent to this other space"...
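The range and resolution Peter quotes fall straight out of the 16.16 format. A quick sketch (helper names are mine):

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS          # 1.0 in 16.16 fixed point == 65536

def to_fixed(metres):
    return int(round(metres * ONE))

def to_metres(fx):
    return fx / ONE

max_range = (2**31 - 1) / ONE  # ~32.8 km each way, so ~65 km end to end
resolution = 1 / ONE           # ~15.3 micrometres, uniform everywhere
print(round(max_range), resolution)  # -> 32768 1.52587890625e-05
```

Unlike floats, the resolution is the same 15 micrometres at the edge of the 65 km range as at the origin, which is the real appeal of fixed point for world coordinates.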

I'm mainly concerned about the network-abstract representation, of 
course.  You still need tricks in the renderer (like continuous 
recentering) to support huge-space schemes.

Also, with fixed-size sectors, I'm not sure how you would do a really 
huge area like the entire planet earth (although as we've established 
from the discussion, floating point numbers fare little better).  If you 
just connect the edges, that's still many many thousands of sectors.  
Perhaps one way of approaching it is as a sparse-matrix problem or a 
hash space.

At any rate: coordinate systems are hard.

On Thu, Feb 01, 2007 at 10:01:57PM +0900, chris wrote:

 Don't know about vos yet but there is a general theory answer to this.
 For a space built on modern floating-point variables (ignoring the
 earlier part about a range from -1 to +1), there are some issues about
 floating point space you need to understand.
 
 It is not possible with the conventional navigation rules people
 normally use: you will get jittery motion, rendering artefacts and
 other problems the further you go from the origin. Roughly speaking,
 things tend to vibrate a bit around 1-2,000m out, then shake a lot
 more around 40,000m, and it gets worse. Most people will tell you this
 sort of thing is due to limited precision. Although that is true, it
 has a lot more to do with spatial resolution and spatial error.
 
 To explain:
 Firstly, around 1.0 the resolution of floating point (x,y,z) space is
 very high: with the difference between one representable number and
 the next being 2.2 x 10^-16. As you get to the radius of the earth
 (about 6.4 x 10^6), the resolution is around 1m for single precision
 floating point coordinates. So the resolution of the space is
 nonuniform - see third slide in:
 http://www.web3d.org/x3d-earth/workshop2006/contributions/PingInteractiveGeoSimFidelityScalability.pdf

 [snip...]
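The nonuniform resolution chris describes is easy to verify directly, at the distances mentioned in the quote. A quick check using numpy (note that the 2.2e-16 spacing near 1.0 is the double-precision figure; for single precision it is about 1.2e-7):

```python
import numpy as np

# np.spacing(x) is the gap to the next representable float32 above x:
# the best positional resolution available at that distance from the origin.
for x in [1.0, 2000.0, 40000.0, 6.4e6]:
    print(x, float(np.spacing(np.float32(x))))
# Near 1.0 the spacing is ~1.2e-7 m; at Earth-radius distance (6.4e6 m)
# it has grown to 0.5 m -- single precision can no longer place a vertex
# more accurately than about half a metre.
```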

-- 
[   Peter Amstutz  ][ [EMAIL PROTECTED] ][ [EMAIL PROTECTED] ]
[Lead Programmer][Interreality Project][Virtual Reality for the Internet]
[ VOS: Next Generation Internet Communication][ http://interreality.org ]
[ http://interreality.org/~tetron ][ pgpkey:  pgpkeys.mit.edu  18C21DF7 ]





[vos-d] Online Space

2007-01-31 Thread S Mattison
This might seem a haphazard or poorly thought out question, but it has
been long begged by science fiction, and I'm very intrigued to hear
answers from people who might know how it would be possible...

Forget everything you know about the COD format.

Say, I have a small online world, which looks something like a pyramid
on top of a hill. Consider the center of the base of this pyramid as
The Origin Point. Say the extent of the square-shaped land area in
my world ranges from the virtual X/Y values of +1 to -1. (I know
nothing about the values of the current pyramid map, but follow me
on a tangent here...)

After that, say I allow avatars into my world, maybe they look like
birds of some sort.

Now... and this is where it gets tricky... Say I give them a command
that allows them to 'fly', or, retain the same Z value, while they
navigate across the X and Y axis...

Would it be possible to allow my world to have near-infinite values
for X and Y (At least, as high as modern floating-point variables go)?
Say; If two avatars float in an opposite direction for hours on end,
for the span of eight, sixteen, thirty-two hours... How would the
world need to be programmed so that, assuming they turn around 180 and
float back, it would take them both exactly the same amount of time to
get back to their original meeting place?

If Penguin A created his own land-mass 28 hours from the meeting
point, how could I store it and retain the data in the server,
assuming said Penguin is capable of finding this point again?

-Steve
