[vos-d] Virtual worlds meeting at siggraph 08

2008-05-21 Thread chris
This is a reminder for those interested in attending/demoing/speaking
at the Multiuser Virtual Environments BOF: MUVE MOOT.

So far we have 8 presenters:
Chris Thorne, PhD Candidate, The University of Western Australia,
http://www.csse.uwa.edu.au/virtual
Tony Parisi, Chief Platform Officer of Vivaty, http://www.vivaty.com/
Michael Wilson, the CEO of There.com,  http://www.there.com/
Doug Twilleager from the Wonderland Group, Sun Microsystems,
https://lg3d-wonderland.dev.java.net/
Rafhael Cedeno, CTO & Co-Founder, The Multiverse Network, Inc.,
http://www.multiverse.net/
Associate Professor Don Brutzman, The Naval Postgraduate School,
Monterey, CA, http://web.nps.navy.mil/~brutzman/
Peter Schickel, CEO of Bitmanagement, http://www.bitmanagement.com/

possible speakers:
Professor Mick Brady, Professor Emeritus, Russell Sage College;
currently Live Teams Manager at the Serious Game Design Institute at
the Santa Barbara City College; Second Life photographer, blogger,
writer and digital artist. ... http://mikimojo.com

Multiuser Virtual Environments BOF: MUVE MOOT
Location:  Los Angeles Convention Center
Room(s):   507
Date(s):   Wednesday 13 August
Reservation Time(s):  setup time at 12 noon, teardown until 2:45pm
Meeting Time(s): 12:30-2:30pm
Room Setup:   theater style with seating for up to 100.
link: http://www.siggraph.org/s2008/attendees/birds/

Meet and discuss Multiuser Virtual Environments (MUVEs) and
demonstrate your work. Topics of interest can be application related:
such as MUVEs in art, education, entertainment or business, or
technical, such as modelling or protocols for MUVEs. Whether your
interest is in standards or just playing, whether you are involved in
multiuser games, online virtual worlds, collaborative or business
applications, please come and share your thoughts and demo your work!

The main subject I would like to focus on is the influence of 3D
virtual environments on social networking. But it is free format and
you can speak on other subjects. Some topics of interest: virtual
commerce, gameplay, technologies (grid, protocols and 3D graphics),
privacy and protection of virtual property.

Anyone who wishes to present or plans to attend please email me
(dragonmagi (at )  gmail.com) so I can get some idea of numbers.

A possible agenda:

1.  Introductions
2.  Demos
3.  Discussion:
4.  The state of the art for MUVEs
5.  Impact of MUVE and social networking technologies on each other
6.  Virtual commerce
7.  Technologies (e.g. grid, protocols and 3D graphics)
8.  Standards, Platforms and portability
9.  Future directions/Current development plans
10. Announcements

About the Organiser:
Chris Thorne is a PhD Candidate with over 25 years' experience in the 3D
graphics industry and academia. His PhD research is more about the
fundamentals of MUVEs and simulation (ways to improve quality,
fidelity, scalability, accuracy) than developing MUVEs themselves,
although he does have a MUVE in development called The Virtual
Universe Project (http://www.csse.uwa.edu.au/virtual/). He is
interested in the influence of MUVEs and social networking technology
on each other.

About SIGGRAPH:
SIGGRAPH is the world's largest annual interactive computer graphics
conference. For more information see: http://www.siggraph.org/about


-- 
Australian Ambassador, Association of Virtual Worlds,
http://associationofvirtualworlds.ning.com/

It be a great secret: there be greater truth at the centre.

___
vos-d mailing list
vos-d@interreality.org
http://www.interreality.org/cgi-bin/mailman/listinfo/vos-d


[vos-d] SIGGRAPH multiuser virtual worlds/applications BOF

2008-03-03 Thread chris
Hello,

for those interested in virtual worlds/applications, a Multiuser Virtual
Worlds Meeting (MUVE Moot) is to be held at SIGGRAPH 2008, please see
http://www.siggraph.org/s2008/attendees/birds/

Location:  Los Angeles Convention Center
Room(s):   507
Date(s):   Wednesday 13 August
Reservation Time(s):  setup time at 12 noon, teardown until 2:45pm
Meeting Time(s): 12:30-2:30pm
Room Setup:   theater style

Meet and discuss Multiuser Virtual Environments (MUVEs) and demonstrate your
work. Topics of interest can be application related: such as MUVEs in art,
education, entertainment or business, or technical, such as modelling or
protocols for MUVEs. Whether your interest is in standards or just playing,
whether you are involved in multiuser games, online virtual worlds,
collaborative or business applications, please come and share your thoughts
and demo your work!

Anyone who plans to come, please email me so I can get some idea of numbers.
Please also say if you have something to present and what you would like to
see on the agenda for discussion.

A possible agenda:

1. Introductions
2. Demos
3. Discussion:
4. Language and protocols
5. Standards
6. Platforms and portability
7. Current development plans
8. Future directions
9. Announcements

cheers,

chris


Re: [vos-d] development status

2007-06-07 Thread chris

Hi, just a few comments on other status that may be of interest.
I have been testing TCP/IP networking with an implementation of the network
sensor nodes in Flux. I have been able to get the basic test examples to
work with two Linux servers. An example of the nodes is given below. For the
mass avatar event at SIGGRAPH I will be working on modifying the
communications for UDP.

This raises the possibility of testing the VOS server with these nodes if
you're interested.

A description of the test examples and the MM server is at
http://www.mediamachines.com/hydra/
and my experience with installing / running the server is at:
http://planet-earth.org/smam/fluxServerInstallRun.html
though you would want to run your own VOS server.

I am also running a 2-hour multiuser virtual world BOF at SIGGRAPH (August),
so if you want me to demo VOS let me know. I would want something that does
not require old browser versions, old Java, etc. if possible, because I need
to be able to demo other stuff with current tech as well.

cheers,

chris

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE X3D PUBLIC "ISO//Web3D//DTD X3D 3.0//EN"
"http://www.web3d.org/specifications/x3d-3.0.dtd">

<X3D profile='Immersive'>
<Scene>
<Group>
<Transform DEF='CUBE_TRANSFORM'>
<Shape
 containerField='children'>
 <Appearance
  containerField='appearance'>
  <Material DEF='Blue'
   containerField='material'
   ambientIntensity='0.200'
   shininess='0.200'
   diffuseColor='0 0 1'/>
 </Appearance>
 <Box
  containerField='geometry'
  size='1 1 1'/>
</Shape>
</Transform>
<PlaneSensor DEF='PS'/>
</Group>

<NetworkSensor DEF='ES' networkSensorId='ES'>
 <field name='translation' type='SFVec3f' accessType='inputOutput'/>
 <Connection DEF='muserver'
  containerField='connection'
  url='swmp://84.20.147.209:13214/MM-6-12345678'/>
</NetworkSensor>


<ROUTE fromNode='PS' fromField='translation_changed' toNode='ES'
toField='set_translation'/>
<ROUTE fromNode='ES' fromField='translation_changed' toNode='CUBE_TRANSFORM'
toField='set_translation'/>
</Scene>
</X3D>


On 08/06/07, Peter Amstutz [EMAIL PROTECTED] wrote:


Ok, things have been a bit quiet lately, so I want to let everyone know
what's going on with VOS.

The first bit of news is that starting in July, I will be able to
dedicate half of my time (20-30 hours a week) to VOS development.  I
have made an arrangement with my current employer to work part time, so
this works out well for everyone.

Second, I am continuing to do design work on s5, and have spent some
coding time prototyping certain crucial features to better
understand them.  I've made some progress in understanding how better to
structure control flow in the presence of asynchronous requests (in s5,
*all* requests will be asynchronous) by using continuations and futures.
I'm also investigating some ideas for a new security design, which I
hope to write about to the list shortly.

Third, s5 is going to use a Python-based build system called SCons,
which I described in my email about build systems a little while ago.  I
hope that this will enable a degree of automation and cross-platform
support that is impossible with autotools.  I've spent some time hacking
on it and I'm comfortable with it, and it is under active development so
it seems like a safe bet.

Fourth, my intention is that come July, I will hit the ground running
with implementing s5 -- you will start seeing real progress and not just
talk.  My hope is that in as little as a month or two, I can have enough
of the s5 kernel written that other people interested in helping can
contribute.

Fifth, I've set up a task tracking system called XPlanner at
http://interreality.org/xplanner (login: guest, password: guest).  You
can view a detailed breakdown of our development roadmap, and see both
how long we expect things to take and what progress (if any) has been
made.

I think that about covers it.  Comments?

--
[   Peter Amstutz  ][ [EMAIL PROTECTED] ][ [EMAIL PROTECTED] ]
[Lead Programmer][Interreality Project][Virtual Reality for the Internet]
[ VOS: Next Generation Internet Communication][ http://interreality.org ]
[ http://interreality.org/~tetron ][ pgpkey:   pgpkeys.mit.edu  18C21DF7 ]







--
http://ping.com.au
http://systemic.com.au
http://planet-earth.org/Rez/RezIndex.html
--
It be a great secret: there be more truth at the centre.


Re: [vos-d] A3dl redesign (was Re: Scaling and Origins -- 0.23 vs 0.24)

2007-05-21 Thread chris

All good points IMHO; it would make it easier to import/export/translate
between formats.

chris

On 22/05/07, Ken Taylor [EMAIL PROTECTED] wrote:


Peter Amstutz wrote:
 Since most of the work has gone into the network layer, a lot of stuff
 in the 3D layer was a quick proof of concept rather than being really
 well thought out.  Even if s4 - s5 transition wasn't necessary, an A3DL
 redesign was already on the plan.

There's a lot of unwritten stuff like this for a newbie to the VOS project
like me :)  But it's good to know. I didn't really know how set you guys
were on the current state of a3dl, but how much it seemed like "make it
work easily in CS" was the main design objective made me a bit uncomfortable
with it. But knowing that it was mostly a proof of concept and that a
redesign is on the plan makes me feel a bit better.

So, I don't know if it's too early to start talking about redesigning
a3dl, but thinking about it today I had a thought. You guys already want to
have VRML (and eventually X3D) translation to and from VOS/Interreality 3D
data. Well, why not make it easier by designing a3dl to be as close to 1:1
with X3D as possible? The X3D guys are putting a lot of work into hashing
out the things that an interoperable 3D standard needs, so why re-invent
the various wheels? I'm not talking about adopting X3D file formats as the
interchange format of course, but rather having the object types and
properties have a 1:1 correspondence with those in an X3D scenegraph, and
having the formats for textures and meshes and shaders and such be X3D
compatible.

Of course, it doesn't have to implement *all* of X3D, and will have its
own extensions on top of what X3D specifies (I'm thinking something like
inter-server portals would probably be very vos-specific). But it might be
a good idea to ride on top of their basic design. Also, doesn't X3D specify
different optional modules and levels of compliance? You could have that
kind of information published by servers and clients, even, to help with
service discovery. But I actually don't know too much about X3D at the
moment -- learning it is on my ever-expanding todo list :)

There will probably be some extra work in making Crystal Space loaders for
a3dl if it's done this way, but it's probably better for the VOS standard
in the long run.

Of course, if there's some reason you guys don't want to do an
X3D-inspired design and think it'd be better to start from scratch with
the 3D scene data, I would be really interested in your opinions. Does X3D
have any serious flaws that would hold it back from being the right data
model for Interreality 3D?

-Ken









Re: [vos-d] Wanna help the Mass Avatar Mash?

2007-05-15 Thread chris

On 15/05/07, Reed Hedges [EMAIL PROTECTED] wrote:



We don't have specific plans for H-anim and VOS. We haven't designed
how jointed, animateable geometry will work in VOS yet.  Chris is just
keeping us informed about possible things to do (thanks Chris) I think.



np, and the other reason is that we should be able to get an X3D client
talking to the VOS server via the network nodes, with a client-side proto
wrapper to handle the VOS-specific protocol.

At this point, we plan on having a general VRML server for VOS that
exposes a running VRML scene in VOS, but I've been lazy and distracted
by web stuff and haven't worked on it for a while. :)



how terrible of you!

If anyone is interested in talking about how jointed, animated geometry
will work (for both humanoid avatars and other stuff), go
ahead :)



still dark magic to me,

chris

Reed




On Sat, May 12, 2007 at 12:18:32PM -0700, dan miller wrote:
 can someone sum up what's going on with this project?  Is there going to
 be some sort of H-anim importer to VOS?  Or some other X3D compatibility
 strategy?

 -dan








[vos-d] SMAM: request for avatar selection page developer

2007-05-14 Thread chris

Hi again, for the event we will need a page people can go to to select
their avatar and enter the world. The avatars available will be the ones
tested and proven for the event.

The catch is that this page will have to have a back-end part that
communicates the user id and avatar id to a server. There are many ways we
can do this, from writing to a file (which server code can read), to a
database, or sending over the network. I don't want to use a database just
for the event, so file or comms are the choices.

There will need to be a login, of course, but security is probably not
necessary: just unique login names and maybe a password. But email and
email verification would be a good idea for preventing bots.
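For the file option, here is a minimal sketch of what the back end might look like. Everything in it (file name, record format, locking scheme) is a hypothetical illustration, not an agreed design: the page handler appends "user id,avatar id" lines to a file that the world server can poll.

```python
# Hedged sketch of the file-based back end (all names hypothetical):
# append "user_id,avatar_id" records to a file the server polls, with a
# lock file guarding concurrent writes from multiple page requests.
import os
import time

SELECTION_FILE = "avatar_selections.csv"  # assumed path

def record_selection(user_id: str, avatar_id: str) -> None:
    # Crude cross-platform lock: O_CREAT|O_EXCL fails while the lock exists.
    lock = SELECTION_FILE + ".lock"
    while True:
        try:
            fd = os.open(lock, os.O_CREAT | os.O_EXCL)
            break
        except FileExistsError:
            time.sleep(0.01)
    try:
        with open(SELECTION_FILE, "a") as f:
            f.write(f"{user_id},{avatar_id}\n")
    finally:
        os.close(fd)
        os.remove(lock)

def read_selections() -> dict:
    # Server side: read the whole file into {user_id: avatar_id}.
    if not os.path.exists(SELECTION_FILE):
        return {}
    with open(SELECTION_FILE) as f:
        pairs = (line.strip().split(",", 1) for line in f if line.strip())
        return {u: a for u, a in pairs}
```

The same interface could later be swapped for a socket-based version without changing the page logic.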

So, if you want to put your hand up for this task, please let me know,

thanks,

chris


[vos-d] NetworkSensor note to abnet, vos, vrspace, cyworx, deepmatrix etc

2007-05-13 Thread chris

Hi to all those people in the MU VR projects I listed, and others I did not
list.

I thought I should say something about the NetworkSensor being proposed for
X3D standardisation.

Although it is now in the working group process and I cannot disclose all
the details of the WG's current version until it has passed the relevant
approvals, there is one general point I thought I should clarify: the
proposed nodes will not block use with protocols from abnet or vrspace or
whatever server. They enable such use. The specification allows you to
identify the application protocol, say abnet, vrspace or swmp etc., to be
used over TCP/IP or UDP etc. Therefore, if you have a client browser that
can process a protocol for abnet, vrspace or swmp etc., it will be able to
talk to the corresponding server.
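As a rough illustration of what "identify the application protocol" could mean on the client side: the Connection url in the example scene elsewhere in this thread uses a swmp:// scheme, so a client might dispatch on the URL scheme. The dispatch table below is entirely hypothetical.

```python
# Sketch: pick an application protocol handler from a NetworkSensor
# Connection url by its scheme. The swmp:// example URL appears in the
# scene file in this thread; the handler table is a made-up illustration.
from urllib.parse import urlparse

HANDLERS = {              # hypothetical client-side protocol handlers
    "swmp":    "SWMP over UDP",
    "abnet":   "abnet protocol",
    "vrspace": "vrspace protocol",
}

def describe_connection(url: str):
    u = urlparse(url)
    # Return the handler name plus the host, port and session id.
    return HANDLERS[u.scheme], u.hostname, u.port, u.path.lstrip("/")

info = describe_connection("swmp://84.20.147.209:13214/MM-6-12345678")
# -> ('SWMP over UDP', '84.20.147.209', 13214, 'MM-6-12345678')
```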

cheers,

chris



Re: [vos-d] Wham! SMAM! Avatar Jam!

2007-05-05 Thread chris

For this event, OTTOMH, here is a first cut at the things that have to be
accomplished. I will use the headings as an outline to this document:
http://www.planet-earth.org/sg07/SMAM07.html
where I'll just start blatting information to it as I find it.

1. May
1.1 Find/allocate some servers for prototyping.
1.2 Collect together a set of public avatars to use and test them (we want
to ensure there are no scripts that would bog a client down and they all
basically work).
1.3 Work out a simple protocol for streaming avatar movements over TCP/IP
or UDP (or both). Since MediaMachines have a demonstrated prototype protocol
(SWaMP) implemented in Flux, that would be a good place to start.
1.4 Choose one or more server software components to test client-server
comms and server-server comms.
1.5 Choose some client software to run tests on.
1.6 Run server-server tests and client-server tests. This will be to shake
out the various tools / technologies we have to choose from.
1.7 Analysis, planning next month.
1.8 Organise booth(s) on the SIGGRAPH show floor. My company (Systemic) is
putting some money in; what about yours?
1.9 UI for login and avatar selection.

2. June

2.1 First single server-client multiple avatar tests. There will probably be
multiple isolated tests like this.
2.2 User interface tests.
2.3 Client interaction tests.
2.4 Second single server-client mass avatar tests. There will probably be
multiple isolated tests like this.
2.5 Analysis.
2.6 Planning for July.

3. July

3.1 First serious test of mass avatar collaboration at a selected hour.
3.2 Second serious test of mass avatar collaboration at a selected hour.
3.3 Analysis.
3.4 Planning for the conference.

4. August

4.1 Conference demonstration 5th-8th.
4.2  MUVEW Networking BOF (Thursday 10:30-12:30).

Anything else?

regards,

chris



Re: [vos-d] Flux Worlds Server Announcement

2007-03-30 Thread chris

Some good points, Ken, and I think some should find their way into the NWWG
requirements.

I would add that the protocol should support the efficient implementation
of a heartbeat. This sounds pretty specific, but I think the ability of a
system to know that the server and clients are responding, and how long it
is taking them to respond, is important. Single-byte or single-word UDP
payloads wrapped in a very small packet should suffice for this.
I use a use case based on EVE Online: think of a player who has invested
hundreds of millions of ISK (in-game currency) in a battleship, and 1.5
years of training/playing to get there. In a battle with NPC pirates the
server lags badly. Madly typing at the keyboard produces no response, and
he cannot get his active armor or weapons online. Meanwhile, on the server,
the NPC pirates are destroying the ship with no status update at the
client. The player gets a couple of updates: one showing his ship 3/4
destroyed - he tries warping out but the server does not respond in time.
His next update is pieces of ex-battleship.

Now in this scenario, if the heartbeat system is not in place, there is no
indication of whether the ship was lost through player incompetence or
because of a server-side issue. Sure, the vendor may have stats on server
load that would back a petition from the player for his ship back, but I
think a heartbeat system would be valuable in this case.

cheers,

chris

On 31/03/07, Ken Taylor [EMAIL PROTECTED] wrote:


"Remember, the Metaverse needs open protocols. Without them... everything
else is Just a World."

I agree with this, and I'm glad that more and more people are realizing
it. However, though it's necessary, it's not sufficient to create the
metaverse. Some other requirements I would use to evaluate any metaverse
system:

- Not only can anyone run a server, but they can easily interlink, and
users can freely traverse the spaces between different servers, with the
potential for seamlessly connecting virtual spaces (eg, with portals).
- The protocol is future-proof and can keep up with developments in
technology and new ideas for interaction, while maintaining backwards
compatibility and a reasonable experience for those with hardware or
bandwidth that can't support the latest-and-greatest.
- The protocol should support user creation and ownership of content,
including a flexible scripting system, and the ability to transfer
user-created content between servers. It should support collaborative
editing and interaction with content.
- The protocol is highly extensible through 3rd-party plugins and not
locked in to whatever the committee decided at standardization time was,
for example, the best parameterized-avatar system/voice-chat
system/streaming-video protocol/physics system/what-have-you. However, at
the same time, there should be a robust baseline spec that allows all
users to have a decent experience even with no plugins added.

I definitely see VOS as having the potential to meet all these
requirements and beyond. I can't really tell from the press release how
far they are planning to go with Flux Worlds. My guess is it's going to be
yet-another-shared-space-server, this time based on X3D and easily
integrated with web pages.

So it'll probably be really neat, but not the metaverse yet ;)

-Ken

- Original Message -
From: Tony Parisi [EMAIL PROTECTED]
To: 'x3d-public list' [EMAIL PROTECTED]; 'www-vrml'
[EMAIL PROTECTED]
Sent: Friday, March 30, 2007 2:08 PM
Subject: [www-vrml] Flux Worlds Server Announcement


 Folks,

 We've been up to something over here - thought I would tell you about it
 before you heard it on the street.

 Media Machines has been developing a multi-user server based on a new
 protocol that we intend to put out into the open. We have dubbed it Simple
 Wide Area Multi-User Protocol, or SWMP (pronounced "swamp"). The intent is
 to work with Web3D and potentially other organizations to standardize SWMP.
 We will also supply a basic open source implementation. Our overriding
 goal -- one that we are pursuing with total passion and vigor -- is to
 create an open infrastructure for the Metaverse.

 We have wrapped SWMP into a server product called Flux Worlds. Flux Worlds
 is currently in alpha test. While the product is still several weeks away
 from beta test, we announced it yesterday with the goal of attracting early
 signups for the beta. We are also integrating a prototype of the new X3D
 networking nodes being developed by the Networking Working Group, right
 into Flux Player. The results look promising.

 Anyway, here is the announcement. We would love to have you be part of the
 beta when it's ready!

 http://www.mediamachines.com/press/pressrelease-03292007.php

 Remember, the Metaverse needs open protocols. Without them... everything
 else is Just a World.

 Yours virtually,
 Tony


 -
 Tony Parisi   415-902-8002
 President/CEO
Re: [vos-d] blog mention of VOS

2007-03-13 Thread chris

On 09/03/07, Peter Amstutz [EMAIL PROTECTED] wrote:


This made me smile:
http://slgames.wordpress.com/2007/03/05/alternatives-to-second-life/

``Virtual Object System - Worthy of note purely because they share the
dream of creating The Metaverse.  There's not really a lot to see yet,
but the reason is that they're taking a very from the ground up
approach.  IF this every lauches, it will end up being fast and
reliable.  Their wiki is active, so it's worth keeping an eye on.  If
they every decide to go for VC funding, they could crush everything
else.''

There is also a comment on the blog post which clarifies a few points
the original author made -- whoever CrystalShard Foo is, thanks :-)

Regarding the last point (the funding, not the crushing), I am currently
working on writing up development and design plans and working towards
the actual proposal we will present to potential funding sources.  If
anyone has any special experience writing proposals or working with
investors, please contact me.  Since we're going for a distinctly
nontraditional business model (open source, open development, community
involvement early and as much as possible) some of this is uncharted
territory...



Hi, I am a bit behind on these emails so I may be saying things already
covered, but one thing that may help: we once got some funding from our
telco mainly *because* we were open source etc. They wanted technology
that would look attractive for businesses to take up, and the low cost of
an open tech and its long-term maintainability were some strong points in
its favour. The pitch was to offer technology that would lead to higher
use/take-up of broadband for rich media applications. This helps broadband
businesses to make money.

The other thing is that other businesses can now see examples of business
models that started off offering free technology and now make money from
some added-value/premium accounts etc.: SL, Google Earth, etc.

chris








[vos-d] The beginning of true MU X3D possibilities

2007-02-27 Thread chris

Hi everyone,

In case you haven't seen my big list posts, there is now a network working
group for X3D, aimed at putting support into X3D for things like routing
events across network connections, like you currently do with internal
ROUTEs. Of course, it will allow direct support for chat and game-style
comms internally from the Browser too. There is already an implementation
being worked on by MediaMachines for their Flux Player (and other
projects) - they are presenting a paper on it at the upcoming Web3D
conference in April.

Things are finally moving ahead in the Web3D networking area. Just think:
the ability to do low-latency UDP or full-featured HTTP without having to
use a proprietary implementation or go via some difficult external API
layers. So if you want to support / influence the development of this part
of the spec, please join the working group.

cheers,

chris


Re: [vos-d] The beginning of true MU X3D possibilities

2007-02-27 Thread chris

On 2/28/07, Peter Amstutz [EMAIL PROTECTED] wrote:


Is there a web page/mailing list set up for this working group?  I'm



there is a link to working groups at:
http://www.web3d.org/x3d/workgroups/

which has links on how to join underneath.


interested in seeing what they're working on.  My understanding, though,
was that there are some fundamental design constraints of X3D that make
a fully shared environment exceedingly difficult.  The network node that
you describe just bridges events between x3d browsers, which still dumps

No real constraints - it is a general networking capability, and routing
between Browsers is only one of the applications. You could do chat with
an XMPP chat client, communicate with a server through a port, send
binary-encoded bytes to / from anything.

It is designed to generate events in/out of the sensor node, but there is
complete flexibility as to what you map to what fields, and it allows for
either sending X3D-encoded stuff or a protocol of your own making. You
could, for example, send packets containing two integers and have them
routed to an MFInt32.
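That two-integer case can be sketched quickly. The wire format here is an assumption chosen for illustration (network byte order, two 32-bit signed ints), not something the NetworkSensor proposal mandates:

```python
# Sketch: pack two integers into a binary packet and unpack them on the
# receiving side as the values destined for an MFInt32 field.
# The "!ii" format is an illustrative choice, not part of the proposal.
import struct

def pack_pair(a: int, b: int) -> bytes:
    return struct.pack("!ii", a, b)     # 8-byte payload

def unpack_to_mfint32(payload: bytes) -> list:
    # An MFInt32 is just a list of 32-bit ints on the receiving side.
    return list(struct.unpack("!ii", payload))

packet = pack_pair(42, -7)
values = unpack_to_mfint32(packet)      # -> [42, -7]
```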

more details here: http://planet-earth.org/x3d/networkSensor.html
and:
http://planet-earth.org/x3d/networkSensorProposal.html
http://planet-earth.org/x3d/UIUseCase.html
http://planet-earth.org/x3d/ExampleNetworkSensor.html

The SWMP protocol being implemented by MediaMachines is specific to using
the X3D scenegraph model - which has some advantages - but that would be
separate from, say, a chat channel, which could be running on another port.

hope that is clear enough,

chris


the entire burden of synchronization on the x3d author...


On Tue, Feb 27, 2007 at 05:44:03PM +0900, chris wrote:
 Hi everyone,

 I case u haven't seen my big list posts, there is now a network working
 group for X3D aimed at putting support into X3D for
 things like routing events across network connections like you currently
do
 with internal ROUTES. Of course, it will allow
 direct support for chat and game style comms internally from the Browser
 too. There is already an implementation being worked on by MediaMachines
for
 their Flux Player (and other projects) - they are presenting a paper on
it
 at the upcoming web3d conference in April.

 Things are finally moving ahead in the web3d networking area: just
think:
 ability to do low latency UDP or full featured http without having to
use a
 proprietary implementation or go via some difficult external API layers.
So
 if you want to support / influence the development of this part of the
 spec please join the working group.

 cheers,

 chris







Re: [vos-d] Physics Braindump

2007-02-25 Thread chris

Do you have UDP implemented?
I imagine the physics would require a fast client-server UDP link,
chris

On 2/26/07, Peter Amstutz [EMAIL PROTECTED] wrote:


Since VOS was originally conceived as a peer-to-peer system, we had this
idea that we could do client-based physics, but that idea quickly breaks
down when you have more than one client applying force to a single
object.  So it will probably end up being something like server-based
simulation + client side prediction.  Prediction may be as simple as
sending linear/rotational velocity and extrapolating from that, perhaps
with collision detection so people don't appear to run through things...

One thing I've come to realize is over the Internet lag is often so bad
that close synchronization is impossible and the best you can do is make
it look good on each client and just hope it's close enough.
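
Peter's "send velocity and extrapolate" prediction can be sketched in a
few lines. This toy dead-reckoning helper is my own illustration (the
names and the single rotation axis are simplifications, not any VOS
API): it just advances the last known state by the elapsed time.

```python
import math

def extrapolate(pos, lin_vel, ang, ang_vel, dt):
    """Dead-reckon an object's state dt seconds past its last update.

    pos and lin_vel are (x, y, z) tuples; ang and ang_vel are scalars
    (radians about one axis) to keep the sketch simple.
    """
    new_pos = tuple(p + v * dt for p, v in zip(pos, lin_vel))
    new_ang = (ang + ang_vel * dt) % (2 * math.pi)
    return new_pos, new_ang

# Last server update: object at the origin, moving +2 m/s along x,
# spinning at 1 rad/s. Predict its state 0.25 s later:
pos, ang = extrapolate((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), 0.0, 1.0, 0.25)
# pos == (0.5, 0.0, 0.0), ang == 0.25
```

The server would overwrite this guess whenever a fresh update arrives;
the collision-detection refinement Peter mentions would clamp new_pos
against the static scene before displaying it.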

On Thu, Feb 22, 2007 at 08:41:09PM -0800, Ken Taylor wrote:
 For some reason I got physics on the brain this week, so I kinda went
crazy
 and added a bunch of thoughts to
 http://interreality.org/cgi-bin/moinwiki/moin.cgi/PhysicsInVos ...
mostly
 about client-side prediction, intended-movement representation, and
using
 access control permissions to enforce a sector physics simulation.  I'm
no
 physics simulation expert by any means, and I still have a lot to learn
 about VOS, but I got a good picture in my head of how VOS physics could
work
 themselves out. I was inspired by http://www.gaffer.org/game-physics/
 (especially the article on network physics) and
 http://developer.valvesoftware.com/wiki/Lag_Compensation ... feel free
to
 comment/criticize/refactor/ignore :)

 -Ken





Re: [vos-d] Online Space

2007-02-12 Thread chris
On 2/13/07, Reed Hedges [EMAIL PROTECTED] wrote:
 chris wrote:
 ...there's a global coordinate system, and a local rendering coordinate
 system...


 So the main thing that you need to do, I guess, is represent your global
 coordinate system not with IEEE floating point numbers (doubles have the
 same problem, just further out), but with a fixed point representation
 (or even a string), and be careful to convert them from that
 representation into IEEE floats only in the local viewpoint-centered
 rendering coordinate system.

 Right?

Yes, as far as coordinate system considerations go, that's pretty much
it. Though it does not matter whether you use doubles, quads or fixed
point, so long as you maintain sufficient precision, and therefore
accuracy, in the object system - whatever works best. I do suspect,
however, that a floating point representation will give better
scalability in the general case.
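
To make the point concrete, here is a minimal sketch (the millimetre
fixed-point scale and all names are my own assumptions, not anything in
VOS): global positions live in 64-bit integers, and only the
camera-relative *difference* is ever converted to floating point, so
the values handed to the renderer stay small and precise.

```python
# Fixed-point units per metre -- an assumption for this sketch.
MM = 1000

def to_render_coords(obj_mm, cam_mm):
    """Convert an integer global position (millimetres) to a
    camera-relative position in metres, safe to hand to a float32
    renderer because the difference is small."""
    return tuple(float(o - c) / MM for o, c in zip(obj_mm, cam_mm))

# An object 40,000.001 m from the global origin, camera right beside it:
obj = (40_000_001, 0, 0)           # 40,000.001 m, stored in mm
cam = (40_000_000, 0, 0)           # 40,000.000 m
print(to_render_coords(obj, cam))  # (0.001, 0.0, 0.0) -- the 1 mm survives
```

Subtracting the two 40,000 m values directly in single precision would
lose the millimetre entirely; doing the subtraction in integers first
is what preserves it.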

chris

 Reed




Re: [vos-d] Online Space

2007-02-11 Thread chris
On 2/11/07, Ken Taylor [EMAIL PROTECTED] wrote:

  if your physics - say bouncing boxes like my example - is performed in
  its own local coordinate space then it could be made consistent every
  time - but I can't see how you would combine the rendering of this in
  realtime with the rendering of the scene

 I don't see why transforming from physics simulation space to world space
 for the purpose of rendering the frame is any more difficult than
 transforming from, say, a model's local coordinate system to world space.

If it's not part of the scene when you simulate the physics, then when
you add the objects of the physics sim into the scene (assuming you
have a way to do this over a series of frames) you could get all
sorts of unrealistic results: objects passing through others, objects
not occluding when they should, no shadows, etc.

chris

 Ken




Re: [vos-d] Online Space

2007-02-10 Thread chris
On 2/10/07, Peter Amstutz [EMAIL PROTECTED] wrote:
 On Wed, Feb 07, 2007 at 08:57:18AM +0900, chris wrote:

  That is not to say this model cannot work as a hybrid system with
  portals at doorways, for space jumps etc. In fact, for very large
  scale solar/galaxy systems you would have to either use very high
  precision in the object system or maybe double precision with portals.
 
  but to get optimal accuracy, scalability etc. throughout wherever the
  avatar travels, the graphics engine should be looking at a
  continuous floating origin.

 Don't space-warping portals achieve this effect?  When you walk through
 the portal (both the rendering walk as well as the actual avatar
 moving through), the space rendering is now centered on a new coordinate
 system.  Provided your sectors are relatively small, this seems to be
 more or less equivalent to the periodic recentering described in the

Sure, it'll fix many problems - just like other segmentation
approaches. It won't solve them all, so it depends on what you
eventually want.

 Dungeon Siege paper you posted.  One of the points of the Dungeon Siege
 paper was also that recentering was a relatively expensive operation, so
 you didn't want to do it every frame, but only when the camera crossed
 certain boundaries, so it's not truly continuous in the sense of doing
 it before every frame.  Besides, that's complete overkill, since the
 point here is precision problems crop up at distances of 30-40km from
 center (assuming 1 notron = 1m) so it takes a very very large world
 before this becomes a problem (or you're doing a geospatial
 simulation...)


The point of referring to DS was that their segmentation approach was
expensive. All segmentation approaches have to have some mechanism to
deal with the boundaries between segments. If you can create artificial
portals and handle them efficiently then that's OK. But when they occur
in free space, overheads and other problems can arise. For example,
what happens if you have an NPC on one hill, an avatar on the other,
and a segment boundary between them? If they are firing at each other,
and possibly going back and forth across the invisible boundary, what
do you do?

 Also, for Interreality, the issue is primarily one of representation,
 since we use an off the shelf 3D engine (Crystal Space).  So my concern
 is how you're going to actually represent those huge worlds (since you
 do have precision problems beyond 30-40km) as a downloaded map, once you
 have that data loaded in, rendering is a separate issue.

I can show that visible artefacts can occur even at one kilometer:
e.g. when there are overlapping surfaces with a small separation - a
pretty common thing in a simulated natural environment. And physics can
be shown to be unpredictable at 10m, or even 0m, if time is not
managed.
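
The scale of the problem is easy to check: the spacing between adjacent
single-precision values grows with distance from the origin, so two
surfaces closer together than that spacing collapse onto the same
representable positions (hence z-fighting and popping). A quick sketch
(the helper name is mine):

```python
import math

def float32_spacing(x):
    """Distance between adjacent single-precision values near x (the ULP).
    float32 has a 23-bit fraction, so the spacing is 2**(exponent - 23)."""
    exp = math.floor(math.log2(abs(x)))
    return 2.0 ** (exp - 23)

# Spacing of representable positions (metres) at various distances:
for d in (1.0, 1000.0, 40000.0):
    print(d, float32_spacing(d))
# ~1.2e-7 m at 1 m, ~6.1e-5 m at 1 km, ~3.9e-3 m at 40 km
```

So at 1 km adjacent representable positions are already ~0.06 mm apart,
and at 40 km ~4 mm apart - enough for visibly jittering geometry and
merged near-coincident surfaces.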

 (I haven't had a chance to read those other links you posted, so perhaps
 those explain the idea in more detail).

No problem - those papers don't go into depth on how you might
implement this inside the graphics engine.

I think it is OK to choose a portal-based segmentation system as long
as you can work out a way to move to a floating origin in the future.
As long as you have an efficient mechanism for iterating over the
objects, and can modify the navigation system and viewpoint system,
then you should be able to do it without difficulty.

And LOD - the ability to tap into the LOD mechanism for objects and
modify it will be valuable in the future. If you can avoid the kinds
of problems DS had, then you should be OK.
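
A floating-origin recenter is conceptually just that iteration: when
the camera drifts too far from the origin, shift every object by the
camera's offset and put the camera back at zero. A toy sketch (the data
layout and names are my own, not any particular engine's API):

```python
def recenter(objects, camera):
    """Shift every object by -camera so the viewpoint returns to (0,0,0).

    objects maps name -> [x, y, z] position list, mutated in place.
    Returns the camera's new position (the origin).
    """
    cx, cy, cz = camera
    for pos in objects.values():
        pos[0] -= cx
        pos[1] -= cy
        pos[2] -= cz
    return [0.0, 0.0, 0.0]

world = {"tree": [40000.0, 0.0, 5.0], "rock": [40010.0, 0.0, -2.0]}
cam = recenter(world, [40005.0, 0.0, 0.0])
# tree is now at [-5.0, 0.0, 5.0], rock at [5.0, 0.0, -2.0]
```

The DS lesson is that this pass costs time proportional to the active
object count, which is why tapping the LOD/activation mechanism (so
only nearby objects are iterated) matters.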

When I finish my thesis (soon!), I'll be looking for an open source 3D
system I can modify and experiment with, so I'll have more to say
then. At the moment, my experiments have been at two ends of the
spectrum: at the low level with C/OpenGL, and at the other end working
on scenegraph and X3D browsers from the outside. I'll be looking for a
project and open source community that is happy to support an effort
to create a floating origin version of their system.

chris



Re: [vos-d] Online Space

2007-02-10 Thread chris
On 2/10/07, Karsten Otto [EMAIL PROTECTED] wrote:
 I am not quite sure what kind of precision is necessary here... I'd
 expect that it should be enough to center the current sector for
 displaying purposes, and re-center to the new sector once you cross a
 portal boundary. Considering relatively small sectors, I'd imagine an
 error factor of a few centimeters  is not too disturbing when the
 objects in question are 50 meters away. This is probably two pixels
 difference on the average display resolution. I can live with that :-)

What counts as enough precision is the main problem with all
segmentation approaches - "enough" is worked out from experience,
testing, or guessing. But for general simulation there is no one size
that will always work when 10m or less can make a noticeable
difference.


 Regarding physics simulation, which (if I understand you correctly)
 suffers the most from matrix creep, well... I am no expert, but
 couldn't you calculate this in a virtual coordinate space, derived
 from the world coordinates in such a fashion that all objects
 involved are close to the center? And then, once you reach some
 stable result, convert the virtual coordinates back to world
 coordinate space and continue from there? That may not be
 particularly precise or realistic, but again, as long as the system
 behaves more or less consistently, I can live with it.

If your physics - say bouncing boxes like my example - is performed in
its own local coordinate space then it could be made consistent every
time - but I can't see how you would combine the rendering of this in
realtime with the rendering of the scene - unless you artificially
composite, painter's-algorithm style. In that case it would work, but
all sorts of rendering effects would not be consistent with the rest
of the scene - shadows, lighting, occlusion etc. And getting the
compositing to look good might be difficult and might slow the
performance of your rendering system. E.g. if you used a BSP tree
system like Fly3D, then how do you composite a physics sequence over
200 frames when it crosses several partition planes?

chris


 Regards,
 Karsten Otto

 Am 09.02.2007 um 16:29 schrieb Peter Amstutz:

  On Wed, Feb 07, 2007 at 08:57:18AM +0900, chris wrote:
 
  That is not to say this model cannot work as a hybrid system with
  portals at doorways, for space jumps etc. In fact, for very large
  scale solar/galaxy systems you would have to either use very high
  precision in the object system or maybe double precision with
  portals.
 
  but to get optimal accuracy, scalability etc. throughout wherever the
  avatar travels, the graphics engine should be looking at a
  continuous floating origin.
 
  Don't space-warping portals achieve this effect?  When you walk through
  the portal (both the rendering walk as well as the actual avatar
  moving through), the space rendering is now centered on a new
  coordinate
  system.  Provided your sectors are relatively small, this seems to be
  more or less equivalent to the periodic recentering described in the
  Dungeon Siege paper you posted.  One of the points of the Dungeon
  Siege
  paper was also that recentering was a relatively expensive
  operation, so
  you didn't want to do it every frame, but only when the camera crossed
  certain boundaries, so it's not truly continuous in the sense of
  doing
  it before every frame.  Besides, that's complete overkill, since the
  point here is precision problems crop up at distances of 30-40km from
  center (assuming 1 notron = 1m) so it takes a very very large world
  before this becomes a problem (or you're doing a geospatial
  simulation...)
 
  Also, for Interreality, the issue is primarily one of representation,
  since we use an off the shelf 3D engine (Crystal Space).  So my
  concern
  is how you're going to actually represent those huge worlds (since you
  do have precision problems beyond 30-40km) as a downloaded map,
  once you
  have that data loaded in, rendering is a separate issue.
 
  (I haven't had a chance to read those other links you posted, so
  perhaps
  those explain the idea in more detail).
 


Re: [vos-d] Online Space

2007-02-10 Thread chris
 through or straddling a portal?

:) amen!

chris





Re: [vos-d] Online Space

2007-02-06 Thread chris
When I speak of a single continuous world space I am referring only to
the subset of the application that is being used in the display
system: the part of the app that includes the graphics pipeline. This
is where we are forced to use single precision because of hardware
limits and performance, and this is where it is good to avoid
overheads or artificial boundaries in an apparently continuous space
(like in Morrowind: when walking along a path, it would halt and you'd
get a message like "loading external environment" before you could
continue). At any one time it is only a subset of the application
space.

for more background you may want to look at these papers:
published in Cyberworlds 2006:
http://planet-earth.org/cw06/thorne-CW06.pdf

to be published in the Journal of Ubiquitous Computing and
Intelligence special issue on Cyberworlds:
http://planet-earth.org/ubcw/thorne-UBCW.pdf

published in proceedings of Cyberworlds05:
http://planet-earth.org/cw05/FloatingOrigin.pdf

That is not to say this model cannot work as a hybrid system with
portals at doorways, for space jumps etc. In fact, for very large
scale solar/galaxy systems you would have to either use very high
precision in the object system or maybe double precision with portals.

but to get optimal accuracy, scalability etc. throughout wherever the
avatar travels, the graphics engine should be looking at a continuous
floating origin.
The best description of an on-the-fly origin-shifting system I have
seen is O'Neil's A Real-Time Procedural Universe, Part Three:
Matters of Scale:
http://www.gamasutra.com/features/20020712/oneil_01.htm

Although it is not a true continuous floating origin, it gives a
similar effect.

On 2/7/07, Peter Amstutz [EMAIL PROTECTED] wrote:
 On Fri, Feb 02, 2007 at 12:15:47PM +0900, chris wrote:
 
  Yes - that's why we use a single continuous world space. Many systems
  like VGIS divide the earth into fixed sized sectors. This sort of
  segmentation creates many overheads.
  The Dungeon Siege game segmented its world into SiegeNodes, each
  with its own local coordinate space. When the viewpoint crossed a
  boundary between nodes, the local coordinate system changed to that
  of the node being entered and a ``Space Walk'' began.
  The space walk visited each active node and recalculated coordinate
  transforms to shift objects closer to the new local origin. This
  ensured coordinates did not get large enough to cause noticeable
  spatial jitter. It used considerable processing resources to do the
  space walk, and the frequency of recalculation had to be limited:
  ``as infrequently as possible to avoid bogging down the CPU'' {Bilas}:
  http://www.drizzle.com/~scottb/gdc/continuous-world.htm

 Okay, I've had a chance to read over and digest the continuous world
 document.  As I understand it, the world is basically a set of nodes
 which are connected to form an adjacency graph.  The edges describe how
 the nodes are oriented/transformed in space in relation to each
 surrounding node.  The camera works in the coordinate space of whatever
 particular node it's on, and everything else is recentered relative to
 the current node.

 I think this fits in very well with using portals in VOS.  A normal
 portal is a polygon in space which causes the renderer to recursively
 start rendering the sector behind the portal, clipped to the portal
 polygon.  This works nicely for indoor areas because if the portal isn't
 visible, it doesn't have to consider the room behind the portal at all.
 It's also used by some engines to connect indoor and outdoor areas (for
 example, I believe indoor areas in World of Warcraft are portals to a
 separate map, so that a viewer who is outside the building doesn't have
 to consider the building interior in rendering.)

 The second kind of portal is a space-warping portal.  This works the
 same as a normal portal, except that a space transform (rotation and
 translation) is applied to the target sector.  This means that target
 sector no longer has to be in the same coordinate system as your current
 space.  Your current space has one origin, the space on the other side
 of the portal has another origin, and they're defined relative to each
 other.  Thus, crossing the portal boundary is in effect recentering the
 entire space.

 I've always been against a unified coordinate system for virtual worlds
 for philosophical and pragmatic reasons (you're never going to get
 people to agree on how to allocate space except via some central
 authority), so it's good to consider that this is probably the best
 technical solution as well.

Agreed - as far as the object system (which is the main part of the
application) goes, you need appropriate coordinate system(s) - like
lat, lon, height and a reference system for geospatial. And for outer
space some segmentation is likely.

The translation from the object-system coordinate system to the
display-system coordinate system happens with LOD/visibility/active
object activation.

Re: [vos-d] second draft requirements

2007-02-03 Thread chris
Hi Peter,

It's good to see you are adopting a plugin framework approach.

On meshes, I was thinking you should support the triangle strip array,
as that is a very efficient form for rendering. Also, there may be
another common mesh format worth supporting, such as the one used in
Ogre3D.
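
Triangle strips are efficient because each triangle after the first
reuses the previous two indices, so N triangles need only N+2 indices
instead of 3N. A sketch of the decode (the function name is mine, not
an Ogre3D or X3D API):

```python
def strip_to_triangles(strip):
    """Expand a triangle-strip index list into individual triangles.

    Every index after the first two forms a triangle with the previous
    two indices; winding is flipped on odd triangles so all triangles
    keep a consistent orientation.
    """
    tris = []
    for i in range(len(strip) - 2):
        a, b, c = strip[i], strip[i + 1], strip[i + 2]
        tris.append((a, b, c) if i % 2 == 0 else (b, a, c))
    return tris

# A strip of 4 indices encodes 2 triangles using 4 instead of 6 indices:
print(strip_to_triangles([0, 1, 2, 3]))  # [(0, 1, 2), (2, 1, 3)]
```

The savings compound for long strips: a 100-triangle strip needs 102
indices where an unshared triangle list needs 300.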

For image formats, one of the biggest performance problems with X3D
is the size of the images needed for textures. Better compression
support, such as JPEG 2000, is pretty much essential.

On security, what can be done about network communications security -
e.g. encryption and authentication?

chris





Re: [vos-d] thought problem 1: physics

2007-02-02 Thread chris
On 2/2/07, Benjamin Mesing [EMAIL PROTECTED] wrote:

  Question: will the two images of the two experiments show box2 in the
  same rest position relative to box1?

 Why don't we consider floating point precision issues the computing
 equivalent of Heisenberg's uncertainty principle?

:) It does seem like that sometimes,

chris

 Sorry, couldn't help it ;-)

 Regards Ben




Re: [vos-d] Thought problem 2: physics 2

2007-02-02 Thread chris
On 2/3/07, chris [EMAIL PROTECTED] wrote:
 On 2/3/07, Reed Hedges [EMAIL PROTECTED] wrote:
  chris wrote:
   Of course... why not use a big integer for time?
 
  I would guess that lots of software does, especially since that's what
  most operating systems give you (e.g. time_t).
 
  
   A big integer at a fixed precision has larger relative error than a
   small one
 
  Why?
 because, although the resolution is even, the relative error = number/maxint.
oops, that's not right - I was thinking in terms of how you calculate
the error for floating point :(
I need coffee ...

chris

 
 
  Also, ODE doesn't use any randomness does it?
 Well, it might - I posted questions to the relevant forum but got no reply.
 However, if I repeat the same experiment at, say, the origin, the
 results are repeatable.

 chris
 
 
  Reed
 


Re: [vos-d] Online Space

2007-02-01 Thread chris
On 2/2/07, Reed Hedges [EMAIL PROTECTED] wrote:

 Karsten and Chris are both right and have insightful comments.

thx Reed :)


 There's no real computational or memory restriction on the size of a
 volume of space *as a volume of space*  Chris is talking about the
 representation of coordinates.

 [[I.e. the only reason that a 1x1x1 kilometer space is different from a
 1x1x1 meter space is that the 3 numbers are bigger. It's not like every
 1x1x1 m cube within the 1x1x1 km space needs N bytes of RAM or anything :)]]

 In the past we've talked about the problems of resolution of large
 floating point numbers but never came to any solution for that per se,
 but perhaps to someday do automatic subdivision of the space into
 multiple sectors, whenever  a need for a tool like that comes up.  So
 you enter new subbordinate or nested coordinate systems as you move
 around.

Subdivision of space is the most common approach, but it does not give
a true continuous world space to move around in, and has a lot of
overheads managing the segments. There are also a multitude of
special-case problems that occur at the boundaries: it can become a
mess. Artificially managing this through portals is OK for games but
does not suit all apps - like a virtual earth, for example.

 If you want to be able to see that whole galaxy in the rendering all at
 once that might be a bit of a challenge, but should be possible to
 figure out. (My guess is that graphics research has already discovered
 some solutions to this?)

The best combo of techniques from research, IMHO, is what I call
origin-centric techniques: these build on the concept of a continuous
floating origin (in the client-side display system), include special
management of clip planes and LOD, and a slightly different simulation
pipeline architecture end-to-end from server to client. Plus stuff
like imposters for distant objects in galaxies.

Note since this is all the subject of my thesis I may be considered a
bit biased in this area :)

cheers,

chris



Re: [vos-d] Online Space

2007-02-01 Thread chris
On 2/2/07, Reed Hedges [EMAIL PROTECTED] wrote:
 chris wrote:
  The best combo of techniques from research IMHO is what I call
  origin-centric techniques that build on the concept of a continuous
  floating origin (in the client side display system), includes special
  management of clip planes and LOD and a slightly different simulation
  pipeline architecture end-to-end from server to client.  Plus stuff
  like imposters for distant objects in galaxies.
 
  Note since this is all the subject of my thesis I may be considered a
  bit biased in this area :)

 Oh, that's great! Please share your expertise.

I can refer you to some papers/presentations/videos etc. The rest will
have to come either from discussions like this or from later material
I write. But first I think I'll begin by posing some thought problems
that you can have a go at answering - I'll give the answers
afterwards, with images, code, or whatever to back them up.


 So what are some of the requirements on the server/networking?
 (Generally speaking, if that's possible.)

i'll get to that later ...


 One thing that I'd like to have at some point is a way to enter
 another object/space; e.g. when flying around the solar system it's
 really a scale model of sorts, until you decide to descend to the
 surface of a planet.   I guess the planet is basically a hyperlink to
 another version of itself. Perhaps the transition could be triggered
 automatically by proximity too. (though that might be confusing or
 irritating to users.)

That's a nice idea but might be difficult. GeoVRML addressed this in a
different way: they continuously scaled both avatar size and speed as
you moved towards the planet surface, based on height above the
surface. That only suits some apps, though - like, what if you want to
enter a space station and your avatar is 10 times bigger than the
station!
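
The GeoVRML-style rule is easy to sketch: movement speed tracks
altitude, so navigation feels uniform across huge ranges of scale. The
linear relation, the clamp, and the names below are my own
illustration, not the GeoVRML spec:

```python
def nav_speed(height_above_surface, base_speed=1.0):
    """Scale avatar speed with altitude, GeoVRML-style: near the ground
    you walk, while from orbit you cross planetary distances in the same
    screen time. Clamped so speed never drops below base_speed.
    The linear relation and base_speed are assumptions for this sketch."""
    return base_speed * max(height_above_surface, 1.0)

print(nav_speed(2.0))        # 2.0 m/s near the ground
print(nav_speed(100_000.0))  # 100 km/s approaching from orbit
```

The same scale factor would be applied to avatar size - which is
exactly what breaks down in the space-station case above, where the
scaled avatar no longer fits the unscaled interior.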

In theory, scaling the space (and objects) does not solve the problem
of navigating such large spatial extents, because it just scales the
problems with it. However, I have experienced some benefits in some
cases that are currently unexplained.

Basically, I find the main things here are managing the origin, clip
planes (i.e. z-buffer resolution), and LOD. And if you look at
O'Neil's articles you will see some of this, plus stuff on imposters
for planets and stars.

chris

 We humans perceive different levels of scale depending on what the
 objects in question actually are; we can make those levels of scale
 explicit and both integrate navigation of the world in the world itself,
 and avoid the scale/coordinate representation problems (and having to
 manually adjust your movement speed from warp factor 5 to mach to
 walk :)

 [Notice that we never specify what the units in VOS are. We can call
 them notrons in honor of an original collaborator in the project :)
 As a de-facto convention they would probably be meters in most worlds,
 and TerAngreal's default walking speed is roughly based on that, but
 they don't have to be meters if you don't want to.]

 Reed




[vos-d] Thought problem 2: physics 2

2007-02-01 Thread chris
Thought problem 2: physics 2

Suppose I am going to do a rigid body simulation. I put one box (box1)
on a plane at the origin, and hold another box (box2) suspended a
meter above the plane nearby. I release box2 at time t=20 and it
bounces, perhaps collides with box1, then eventually comes to rest. I
snap an image of the rest state of the sim.

Now I repeat the entire sim with the viewpoint, boxes and plane in
exactly the same positions as before. I drop box2 at time t=8000, let
it bounce, and snap an image of the sim when it is at rest.

Question: will the two images of the two experiments show box2 in the
same rest position relative to box1?
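
One reason the answer can be "no", if the engine happens to keep time
in single precision (an assumption for this sketch): the smallest
representable time increment grows with the clock value, so the step
actually applied at t=8000 is not the step applied at t=20. A quick
check (the helper name is mine):

```python
import struct

def as_float32(x):
    """Round a Python float to the nearest single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# A 1 ms physics step added to a small vs a large simulation clock.
# Subtracting the start time recovers the step that was really applied:
dt = 0.001
early = as_float32(as_float32(20.0) + dt) - 20.0
late = as_float32(as_float32(8000.0) + dt) - 8000.0
print(early, late)  # the effective timestep differs between the two runs
```

With different effective timesteps, the two integrations diverge, and
box2's rest position relative to box1 need not match between the two
images - even though nothing about the scene changed.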



Re: [vos-d] SecondLife client goes Open Source

2007-01-10 Thread chris
On 1/9/07, Len Bullard [EMAIL PROTECTED] wrote:
 Letting out the viewer is something of a SOP.  I think the server-side is
 possibly more important given that there are any number of open source
 viewers out there for 3D platforms that are just as good or better.  It is
 the management of the server farm that makes the difference, that and a big
 budget for marketing.
Yes - the server-side services *and* the networking: SL uses UDP; X3D
has little in the way of networking capability direct from the
client. I have been trying for two years to get improved networking
and web services capability into X3D, but it is arduous. It became a
working group proposal a year ago and the WG still has not been
approved. The Consortium is slow to recognise what, I think, is
essential to its success.

chris


 Yes, I think they are looking at migrating the building market, but the only
 thing that brings in the bigCos is the site traffic.  Otherwise, to Sears,
 there is no advantage to being there.   IBM can talk a lot about boardroom
 VR but they are a services company in this market and without other
 companies willing to host on private farms, there is no market.

 There is a lot of puff in the online worlds market.  Of what value is it to
 own content that you can't move because it only works on that platform?  So
 like a Macintosh or a Mall, without a big membership that is actually going
 there often, having a presence there is largely a decorative bauble, a loss
 leader for being 'in the know'.  This market is relying on the naivete of
 the IT groups of the companies hosting there.

 The in-world economy is a fascinating experiment in waiting to see when the
 Feds will begin to look at it the same way they look at church bingo.  They
 tend to wait until the value is high enough that they can safely take their
 cut without killing the game.

 len

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 On Behalf Of Or Botton
 Sent: Monday, January 08, 2007 10:06 AM
 To: VOS Discussion
 Subject: Re: [vos-d] SecondLife client goes Open Source

 Granted, it was expected, but there is one major issue that's a big
 bad omen: and that's content copy protection.

 SecondLife has been largely touted as a place where you can make a
 quick buck by creating and selling copies of content. This is
 mostly an artificial market created by placing DRM on objects - being
 able to flag a texture, model, script or an entire package as
 non-copyable, non-modifiable or non-transferable.

 Personally, I am all for an open source platform with no DRM involved.
 I believe that a VR platform can only become mainstream and widespread
 if it is open and free. But SecondLife's act is more self-destructive
 because by nature they are not open and free.

 With the source out, it would be a rather easy task to duplicate
 models and textures of objects, pretty much breaking the DRM with a
 very casual effort from the programmer. This could be very damaging
 to their internal economy. Again, I do not support the concept of
 having virtual economies, but doing what they just did is more like
 shooting themselves in the foot.

 Perhaps this signals that LindenLab now views the big gamers -
 companies and such - as the real customers? These people will have
 much less of an issue enforcing their copyrights than the regular
 person.






Re: [vos-d] SecondLife client goes Open Source

2007-01-10 Thread chris
On 1/11/07, Reed Hedges [EMAIL PROTECTED] wrote:
 chris wrote:
  On 1/9/07, Len Bullard [EMAIL PROTECTED] wrote:
  Letting out the viewer is something of a SOP.  I think the server-side is
  possibly more important given that there are any number of open source
  viewers out there for 3D platforms that are just as good or better.  It is
  the management of the server farm that makes the difference, that and a big
  budget for marketing.
  yes the server side services *and* the networking

 Cory Linden (I forget his real name) did a Q&A about open-sourcing the
 client, and someone asked him if they thought that also open-sourcing
 the server would hurt SL from a business perspective, and he said no.
 But I wonder if they do have plans to do it.
Yes, that would be pretty interesting.


  X3D
  has little in the way of networking capability - direct from the
  client.

 It's true. X3D is basically silent about multiuser stuff.  The main way
 of doing anything like on-line changes is a kind of AJAX approach: use a
 script to send requests back to a CGI program on the server for more
 data to display in the scene. There's a function in the scripting API
 which basically does this, called createVrmlFromString (or
 createVrmlFromURL?) or something like that.

Ajax3D relies on the JavaScript/SAI (or EAI) API of the Web browser.
But you can do it all with straight HTTP and a combination of loadURL and
createVrmlFromUrl (as used on planet-earth.org and described in
http://planet-earth.org/sg05/sg05EvolutionaryAccident.html
http://planet-earth.org/sg05/presentation/EfficientHTTP.html
http://planet-earth.org/sg05/sg05planet-earthInstructions.html).
This uses just the two internal EAI/SAI calls, with no need to go
through the Web browser for its JavaScript API.
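The server side of that HTTP round-trip can be sketched very simply: the client's createVrmlFromUrl call fetches a URL, and the server replies with a VRML fragment to graft into the scene. A minimal Python sketch of such a CGI-style responder (the node contents are illustrative, not taken from planet-earth.org):

```python
# Minimal sketch of the server end of the HTTP round-trip described
# above: the client calls createVrmlFromUrl() against an endpoint,
# and the server replies with a VRML97 fragment to add to the scene.
# The fragment contents here are illustrative assumptions.

def vrml_fragment(x: float, y: float, z: float) -> str:
    """Build a VRML97 fragment placing a unit box at the given position."""
    return (
        "#VRML V2.0 utf8\n"
        f"Transform {{ translation {x} {y} {z}\n"
        "  children Shape { geometry Box { size 1 1 1 } }\n"
        "}\n"
    )

# A CGI script would print a Content-Type header, then the fragment:
body = vrml_fragment(0.0, 2.0, -5.0)
print("Content-Type: model/vrml\n")
print(body)
```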

 People have made some very workable multiuser systems that used VRML
 heavily in the past, and at least one or two of those are still around
 and kicking (VNet2 and VR4All; Blaxxun's thing?) though the networking
 and multiuser aspects are not standard.

Yes - all relied on the Web browser's JavaScript API, I believe, and that
kept being broken by one industry player or another: e.g. when
Netscape changed its JavaScript binding, or when Microsoft froze its Java
support, etc. So portable solutions were not possible. It is probably a
similar situation today, but I would like to be proven wrong (e.g. show
me an SAI solution portable across Web browsers).

 I think that VOS could be a good multiuser networking front end for a
 VRML or X3D world running on the server-- VRML and X3D are designed to
 describe a scene, with scripts and routes in it to describe animations
 and interactions.   The idea is to have that VRML system running on the
 server, and have that scene reflected in a set of A3DL VObjects.
Are A3DL objects Ajax 3D objects?

I agree somewhat: X3D, especially the XML encoding, is best suited for
interchange. It is ridiculous to expect applications to be programmed
in XML, though. VRML is better to program in, ditto for X3D Classic, but
Python or something like that would be better still.

 In designing the A3DL object model we took a few cues from VRML but we
 ended up trying to simplify and flatten things as well; some of the
 stuff that VRML does via extra nesting of nodes (like separate Shape
 and Geometry nodes) we do via the polymorphic types of metaobjects.
OK, I'd have to look it up - I don't know much about VOS
except that it is basically something I'd like to see on the server
side as part of a Web3D app.
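The flattening Reed describes - replacing VRML's nested Shape/Geometry pair with one polymorphic object - can be illustrated abstractly. A hypothetical sketch, with invented class names rather than A3DL's actual metaobject types:

```python
# Hypothetical sketch contrasting VRML's nested Shape/Geometry style
# with a flattened, polymorphic object model like the one described
# for A3DL.  Class names are illustrative, not real A3DL metaobjects.

from dataclasses import dataclass

# VRML style: a Shape node wrapping a separate Geometry node.
@dataclass
class BoxGeometry:
    size: tuple

@dataclass
class Shape:
    geometry: BoxGeometry

# Flattened style: one polymorphic object per renderable thing.
class Object3D:
    pass

@dataclass
class Box(Object3D):
    size: tuple

nested = Shape(geometry=BoxGeometry(size=(1, 1, 1)))
flat = Box(size=(1, 1, 1))
print(nested.geometry.size == flat.size)  # True
```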

 Anyway, I stopped working on the VRMLServer a while ago to go work on
 Web stuff for a while. I hope to go back to it soon, but if anyone wants
 to work on it, it's in the source repository. It's in a halfway state
 where it can load a VRML file and create some objects, but it's buggy.
 It uses OpenVRML to load and run the VRML scene.
I want to look at collaborating on something like this in the future -
after my PhD (a few months away).

chris


 Reed






Re: [vos-d] SecondLife geometry

2007-01-10 Thread chris
Someone answered on another thread that it is not: they do cut operations, but not CSG.
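For contrast, fully general solid booleans - the CSG that SL's cuts don't provide - can be sketched with signed distance functions, where union, intersection and difference reduce to min/max. A generic illustration, not how Second Life represents prims:

```python
# Sketch of general CSG booleans via signed distance functions (SDFs):
# a point is inside a solid where its distance value is negative.
# This is a generic illustration, not Second Life's prim model.

import math

def sphere(cx, cy, cz, r):
    """SDF of a sphere centred at (cx, cy, cz) with radius r."""
    return lambda x, y, z: math.dist((x, y, z), (cx, cy, cz)) - r

def union(a, b):     return lambda x, y, z: min(a(x, y, z), b(x, y, z))
def intersect(a, b): return lambda x, y, z: max(a(x, y, z), b(x, y, z))
def subtract(a, b):  return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

a = sphere(0, 0, 0, 1.0)
b = sphere(0.5, 0, 0, 1.0)
cut = subtract(a, b)          # sphere a with sphere b carved out

print(cut(-0.9, 0, 0) < 0)    # inside the remaining shell: True
print(cut(0.5, 0, 0) < 0)     # inside the carved-out region: False
```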

chris

On 1/11/07, Reed Hedges [EMAIL PROTECTED] wrote:

 Does anyone know if you can do boolean operations with prims?  I vaguely
 recall that you can apply a few cut-out operations, but not completely
 general prim-prim boolean ops?

 Reed



 Mark Wagner wrote:
  On 1/10/07, Peter Amstutz [EMAIL PROTECTED] wrote:
  One thing I've been wondering about -- while their primitives seem to be
  a pretty clever solution to the bandwidth problem, is their graphics
  architecture completely committed to being based on prims?  With the
  rest of the world being based on straight triangle meshes or surface
  patches, their graphics model seems to be completely unable to keep up
  with the state of the art in 3D graphics.
 
  One thing I like about the prim model is that it can be converted to
  raytracing and CSG with little to no effort.
 
  (Yes, you can raytrace meshes.  But they look like raytraced meshes.)
 






Re: [vos-d] SecondLife client goes Open Source

2007-01-09 Thread chris
On 1/9/07, Or Botton [EMAIL PROTECTED] wrote:

 On Jan 9, 2007, at 1:08 PM, swe wrote:

  Textures often look shabby, most of their 'houses' look like boxes, and
  their 'trees' float in the sky, not on the ground.
  I expected lots of people there, but so far I've found around 20 in the
  starting place - which Cybertown had in its better times, too.

 I have to chip in on this one - it's true that most of the
 structures you've encountered may look pretty bad, but you have to
 remember that this is mostly because the creators of said buildings
 tend to be inexperienced users.

 The idea behind SL's live building system is to lower the barrier to
 creation, to the point where any user who wishes to try it -
 professional or not - can build something without having to go through
 a long period of study. This does cause a lot of low-quality,
 bad-looking structures to populate the landscape, but it's not because
 the engine cannot do any better.

 Feel free to look me up on the IRC channel and I'll show you around
 SL - some of the buildings built by the more professional artists may
 surprise you.

Yeah, some have spent the time to develop the skill and the content.
I saw some impressive SL stuff, which surprised me considering how
primitive their 'prim' modelling components seem to be.

chris


