Re: [vos-d] Animation comment

2006-12-02 Thread Peter Amstutz
In a lot of respects, streaming animation is actually much easier.  You 
just sample your input device, figure out how to move the model (that's 
where the magic happens) and send out a position update.  Playing back 
animation is harder because you need to store it, interpolate position 
and velocity between keyframes, and possibly blend multiple animations 
together.  Just moving your avatar around with the mouse could be seen 
as a form of streaming.
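
Just to make that contrast concrete, here is a rough sketch of the kind of
keyframe bookkeeping playback needs and a live stream doesn't (illustrative
C++ only -- the struct and function are made up for this example, not part
of VOS):

    #include <algorithm>
    #include <vector>

    struct Keyframe {
        double t;        // keyframe time in seconds
        float  pos[3];   // stored position sample
    };

    // Interpolate a position at time t from a time-sorted keyframe list.
    void samplePosition(const std::vector<Keyframe>& frames, double t, float out[3])
    {
        if (frames.empty()) return;

        // Clamp to the ends of the clip.
        if (t <= frames.front().t) { std::copy(frames.front().pos, frames.front().pos + 3, out); return; }
        if (t >= frames.back().t)  { std::copy(frames.back().pos,  frames.back().pos + 3, out); return; }

        // Find the pair of keyframes bracketing t and blend linearly.
        for (size_t i = 1; i < frames.size(); ++i) {
            if (t < frames[i].t) {
                const Keyframe& a = frames[i - 1];
                const Keyframe& b = frames[i];
                double u = (t - a.t) / (b.t - a.t);
                for (int k = 0; k < 3; ++k)
                    out[k] = float(a.pos[k] + u * (b.pos[k] - a.pos[k]));
                return;
            }
        }
    }

A live stream needs none of this -- the receiver just applies the most
recent sample as it arrives.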

Controlling your avatar with esoteric input devices is beyond the scope 
of VOS for the moment.

On Fri, Dec 01, 2006 at 01:31:11AM -0700, James Wilkins wrote:
 I've been watching the VOS project for quite a while without commenting 
 or participating, beyond getting it to compile once.  I'm very good at 
 lurking.  ;)
 
 Anyway, have you considered supporting streaming animations in addition 
 to pre-recorded ones?  One thing I've noticed about existing 3d 
 environments is that making avatars move realistically is rather hard: 
 you have to have all desired motions available, find the correct ones 
 quickly, and activate the animations in the correct sequence and timing. 
 
 It would seem much more natural to capture real motion using various 
 devices like cameras, accelerometers, bend sensors, and so on, and 
 translate this motion into actions of the 3d avatar in real time.  It 
 would probably even be possible to translate simple mouse motions into 
 gestures more spontaneous than those available with pre-recorded animations.
 
 I realize that esoteric input devices are neither common nor readily 
 available, and are not likely to be for some time, but it would be nice 
 to have support for such built into VOS.

[   Peter Amstutz  ][ [EMAIL PROTECTED] ][ [EMAIL PROTECTED] ]
[Lead Programmer][Interreality Project][Virtual Reality for the Internet]
[ VOS: Next Generation Internet Communication][ http://interreality.org ]
[ http://interreality.org/~tetron ][ pgpkey:  pgpkeys.mit.edu  18C21DF7 ]





Re: [vos-d] Animation comment

2006-12-02 Thread Peter Amstutz
On Fri, Dec 01, 2006 at 10:47:37PM -0800, Ken Taylor wrote:

 For immersion purposes, an approximate look direction is sufficient. But it
 seems like having a look-at command wouldn't be any simpler than just
 passing along a look vector. Notably, most FPS shooters nowadays actuate
 where each player is looking, and it does help to create a more believable
 playing experience. Actually, maybe the direction the head is looking
 isn't a good example, since it is pretty trivial (it really boils down to
 just one vector) -- something like arbitrary arm actuation would be more
 complex. Though, the case for arbitrary arm actuation is harder to make, as
 most people wouldn't have motion capture devices, and a collection of
 pre-animated gestures may be enough for basic body language and immersion
 purposes.

In s4, for avatars we basically cheated -- the avatar was a file in the 
.md2 (Quake 2!) format.  There was an actor interface that let you 
specify which animation loop the avatar should display.

This was done for expediency, of course.  The problem with using an 
opaque 3rd-party file format is precisely that we couldn't do what you 
are talking about here -- we couldn't turn the head to look at where the 
user was looking, we couldn't have arbitrary 
articulated limbs, etc.

With regard to limb movement, one compromise that gives you limb 
movement without needing special input is inverse kinematics, so you 
can say "I want my guy to touch *here*" and it moves the arm to touch 
that place.  I think Second Life recently introduced a feature like 
this?
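
For what it's worth, the core math for a single arm is small -- this is the
textbook two-bone solve in 2D (purely illustrative; it's not VOS code and
has no relation to whatever Second Life actually implemented):

    #include <algorithm>
    #include <cmath>

    struct Angles { double shoulder, elbow; };   // joint angles in radians

    // Given upper-arm length l1, forearm length l2, and a target point (x, y)
    // relative to the shoulder, return joint angles that reach toward it.
    Angles solveTwoBoneIK(double l1, double l2, double x, double y)
    {
        const double pi = std::acos(-1.0);

        double d = std::sqrt(x * x + y * y);
        // Clamp to a reachable, nonzero distance so acos() stays in range.
        d = std::min(l1 + l2, std::max(d, std::fabs(l1 - l2)));
        d = std::max(d, 1e-9);

        // Law of cosines: interior elbow angle, then the shoulder offset.
        double elbowInterior = std::acos((l1 * l1 + l2 * l2 - d * d) / (2.0 * l1 * l2));
        double shoulder      = std::atan2(y, x)
                             - std::acos((l1 * l1 + d * d - l2 * l2) / (2.0 * l1 * d));
        return Angles{ shoulder, pi - elbowInterior };
    }

Skeletal IK in 3D with joint limits is messier, but the idea is the same:
the sender only has to say where the hand should go.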

 But the protocol should be forward-looking, and allow for total model
 actuation if the server and receiving clients are happy with that. But in
 the near future, most servers would probably be set up to say "that's way
 too many movement vectors, buddy -- simplify it a little," and the animation
 model should be able to scale down as needed.

Like I said, a live data feed is actually easier in a lot of ways -- 
it's storing and playing back animation data which is hard.  Entirely 
solvable, but still hard.

[   Peter Amstutz  ][ [EMAIL PROTECTED] ][ [EMAIL PROTECTED] ]
[Lead Programmer][Interreality Project][Virtual Reality for the Internet]
[ VOS: Next Generation Internet Communication][ http://interreality.org ]
[ http://interreality.org/~tetron ][ pgpkey:  pgpkeys.mit.edu  18C21DF7 ]





Re: [vos-d] Animation comment

2006-12-01 Thread Ken Taylor
It seems to me that streaming animations would be relatively easy in the VOS
system. You could simply set up a listener to changes in the position and
orientation of the vertices/nodes of the model that's animating, and update
the values locally as they come in. Similar to how avatar movement is
propagated.
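
In very rough shape, something like this (an illustrative C++
observer-pattern sketch with made-up names, not the actual VOS listener
interface):

    #include <functional>
    #include <vector>

    struct NodeTransform {
        int   nodeId;          // which vertex/bone/node moved
        float position[3];
        float orientation[4];  // quaternion
    };

    class AnimatingModel {
    public:
        using Listener = std::function<void(const NodeTransform&)>;

        void addListener(Listener l) { listeners.push_back(std::move(l)); }

        // Called whenever a remote update for one node arrives; each listener
        // patches its own local copy of the model.
        void applyRemoteUpdate(const NodeTransform& t) {
            for (const auto& l : listeners)
                l(t);
        }

    private:
        std::vector<Listener> listeners;
    };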

I think the larger discussion of animations is referring to how to store a
sequence of known animation moves, with intent to replay them later, within
the VOS object model. And whether that representation could be re-used for
other purposes as well.

Someone correct me if I'm wrong :)

Ken

- Original Message - 
From: James Wilkins [EMAIL PROTECTED]
To: vos-d@interreality.org
Sent: Friday, December 01, 2006 12:31 AM
Subject: [vos-d] Animation comment


 I've been watching the VOS project for quite a while without commenting
 or participating, beyond getting it to compile once.  I'm very good at
 lurking.  ;)

 Anyway, have you considered supporting streaming animations in addition
 to pre-recorded ones?  One thing I've noticed about existing 3d
 environments is that making avatars move realistically is rather hard:
 you have to have all desired motions available, find the correct ones
 quickly, and activate the animations in the correct sequence and timing.

 It would seem much more natural to capture real motion using various
 devices like cameras, accelerometers, bend sensors, and so on, and
 translate this motion into actions of the 3d avatar in real time.  It
 would probably even be possible to translate simple mouse motions into
 gestures more spontaneous than those available with pre-recorded
animations.

 I realize that esoteric input devices are neither common nor readily
 available, and are not likely to be for some time, but it would be nice
 to have support for such built into VOS.







Re: [vos-d] Animation comment

2006-12-01 Thread S Mattison

It's a great idea in theory. Downright intriguing. I never thought of it
before.

But the question arises: how often do you choose to update it? What will each
user be missing, if one user gets a vector that tells them 'in this
timeframe, move this vertex through this vector', and the next user in line
gets something a little bit different for their timeframe? And again, how
much load would it put on the server if everyone is doing these dynamic
animations?

Surely, while playing Half-Life or countless other games with 'ragdoll
physics' built in, you think, 'Man, a multiplayer game like this would be
great!'... But would both players see the same thing? Just how much data
transfer would it require if one ragdoll body was being shared between
only two computers? How about ten bodies, between two computers? Now how about
ten, between five computers? And think of the one server computer that has to
send all that data to all the others.
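
(Purely as an illustrative back-of-the-envelope figure, not a measurement of
anything VOS actually does: a ragdoll of 15 bones, each sending a 3-float
position plus a 4-float quaternion at 20 updates per second, comes to
15 * 7 floats * 4 bytes * 20 Hz = 8,400 bytes/s per body per viewer, before
any protocol overhead. Ten such bodies fanned out to five peers would already
be on the order of 400 kB/s leaving the sending machine.)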

That poor, poor computer. =)
-Steve

On 12/1/06, Ken Taylor [EMAIL PROTECTED] wrote:


It seems to me that streaming animations would be relatively easy in the VOS
system. You could simply set up a listener to changes in the position and
orientation of the vertices/nodes of the model that's animating, and update
the values locally as they come in. Similar to how avatar movement is
propagated.

I think the larger discussion of animations is referring to how to store a
sequence of known animation moves, with intent to replay them later, within
the VOS object model. And whether that representation could be re-used for
other purposes as well.

Someone correct me if I'm wrong :)

Ken

- Original Message -
From: James Wilkins [EMAIL PROTECTED]
To: vos-d@interreality.org
Sent: Friday, December 01, 2006 12:31 AM
Subject: [vos-d] Animation comment


 I've been watching the VOS project for quite a while without commenting
 or participating, beyond getting it to compile once.  I'm very good at
 lurking.  ;)

 Anyway, have you considered supporting streaming animations in addition
 to pre-recorded ones?  One thing I've noticed about existing 3d
 environments is that making avatars move realistically is rather hard:
 you have to have all desired motions available, find the correct ones
 quickly, and activate the animations in the correct sequence and timing.

 It would seem much more natural to capture real motion using various
 devices like cameras, accelerometers, bend sensors, and so on, and
 translate this motion into actions of the 3d avatar in real time.  It
 would probably even be possible to translate simple mouse motions into
 gestures more spontaneous than those available with pre-recorded
animations.

 I realize that esoteric input devices are neither common nor readily
 available, and are not likely to be for some time, but it would be nice
 to have support for such built into VOS.










--
Steven Mattison - CTI Services Help Desk
(303) 745-3077

If you chase two rabbits, you will lose them both.


Re: [vos-d] Animation comment

2006-12-01 Thread Lalo Martins
The trick here is divide and conquer.  VOS represents these objects with
a lot of granularity, so the updates that actually go through the network
end up being quite small.

Also, the protocol has built-in support for updates, scheduling, and
invalidation.  If three updates to the same object are in the queue before
the first one actually gets sent, then only the most recent one will
actually get transmitted.

That's a very simplistic description; if you want to know the details
you'll have to study the source code :-)
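
The coalescing part, very roughly, amounts to something like this (an
illustrative sketch with made-up types -- the real VOS/VIP queue is more
involved, which is why I point you at the source):

    #include <map>
    #include <string>

    struct Update {
        std::string objectId;  // which object this update applies to
        std::string payload;   // serialized new state
    };

    class CoalescingQueue {
    public:
        // A later update to the same object replaces an earlier, unsent one.
        void push(const Update& u) { pending[u.objectId] = u; }

        // Drain everything that is ready to go out on the wire.
        template <typename SendFn>
        void flush(SendFn send) {
            for (const auto& entry : pending)
                send(entry.second);
            pending.clear();
        }

    private:
        std::map<std::string, Update> pending;
    };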

I guess my point is, trust us; Peter and Reed have already worked for years
on that part of the problem, and we're quite confident in the solution we
have.  In fact, the current discussion -- although that may not be
apparent to those new to the model -- is largely (but not entirely)
motivated by a desire for the animation representation that best makes use
of what VOS and VIP can already do well.

On Fri, 01 Dec 2006 03:29:47 -0700, S Mattison wrote:
 But the question arises: how often do you choose to update it? What will
 each user be missing, if one user gets a vector that tells them 'in this
 timeframe, move this vertex through this vector', and the next user in
 line gets something a little bit different for their timeframe? And
 again, how much load would it put on the server if everyone is doing
 these dynamic animations?
 
 Surely, while playing Half-Life or countless other games with 'ragdoll
 physics' built in, you think, 'Man, a multiplayer game like this would be
 great!'... But would both players see the same thing? Just how much data
 transfer would it require if one ragdoll body was being shared between
 only two computers? How about ten bodies, between two computers? Now how
 about ten, between five computers? And think of the one server computer
 that has to send all that data to all the others.

best,
   Lalo Martins
--
  So many of our dreams at first seem impossible,
   then they seem improbable, and then, when we
   summon the will, they soon become inevitable.
--
personal:  http://www.laranja.org/
technical:http://lalo.revisioncontrol.net/
GNU: never give up freedom http://www.gnu.org/





Re: [vos-d] Animation comment

2006-12-01 Thread Ken Taylor

Jonathan Jones wrote:
 James Wilkins wrote:
  Anyway, have you considered supporting streaming animations in addition
  to pre-recorded ones?  One thing I've noticed about existing 3d
  environments is that making avatars move realistically is rather hard:
  you have to have all desired motions available, find the correct ones
  quickly, and activate the animations in the correct sequence and timing.
 
 With the technology the end-user would likely have, the best way to do
 this is for the client to have a script that says
 "when object x is moving, activate its walking animation". The key to
 getting smooth, realistic animations is to do as
 *much* as possible client-side. However fast networks get, it will
 *always* be quicker to load something off the hard
 disk than over the network connection.

A good compromise may be to have certain movements be activated by
higher-level scripting (such as walking animations), and others be fully
actuated (such as which direction the head is looking). Of course, as
motion-sensing VR type hardware becomes more common, more people will want
higher actuation in their avatars for immersion purposes. The amount of
real-time actuation to use should probably be configurable by the clients
and the servers. For instance, the client controlling the avatar can set up
how much actuation to send out on the network, the server running the space
can have a quota or limit of the amount of actuation bandwidth allowed per
client and the types of actuation allowed, and another viewing client can
tell the server what kinds of actuation and how much bandwidth it wants to
receive. This way, users with the bandwidth can have a rich experience,
while those with slower connections don't get totally left behind.
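
To make that negotiation concrete, the effective setting might boil down to
something like this (hypothetical types and names -- nothing here exists in
VOS today):

    #include <algorithm>
    #include <set>
    #include <string>

    struct ActuationPolicy {
        int bytesPerSecond;                // bandwidth budget for live actuation
        std::set<std::string> channels;    // e.g. "head-look", "arms", "full-body"
    };

    // The effective policy is the intersection of what the controlling client
    // offers, what the space's server allows, and what the viewer asks for.
    ActuationPolicy negotiate(const ActuationPolicy& sender,
                              const ActuationPolicy& server,
                              const ActuationPolicy& viewer)
    {
        ActuationPolicy result;
        result.bytesPerSecond = std::min({ sender.bytesPerSecond,
                                           server.bytesPerSecond,
                                           viewer.bytesPerSecond });
        for (const std::string& c : sender.channels)
            if (server.channels.count(c) && viewer.channels.count(c))
                result.channels.insert(c);
        return result;
    }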

Ken




Re: [vos-d] Animation comment

2006-12-01 Thread Jonathan Jones
Ken Taylor wrote:
 Jonathan Jones wrote:
  James Wilkins wrote:
   [streaming animations are good for synchronization]
  [do as much as possible client-side]

 A good compromise may be to have certain movements be activated by
 higher-level scripting (such as walking animations), and others be fully
 actuated (such as which direction the head is looking). Of course, as
 motion-sensing VR type hardware becomes more common, more people will want
 higher actuation in their avatars for immersion purposes. The amount of
 real-time actuation to use should probably be configurable by the clients
 and the servers. For instance, the client controlling the avatar can set up
 how much actuation to send out on the network, the server running the space
 can have a quota or limit of the amount of actuation bandwidth allowed per
 client and the types of actuation allowed, and another viewing client can
 tell the server what kinds of actuation and how much bandwidth it wants to
 receive. This way, users with the bandwidth can have a rich experience,
 while those with slower connections don't get totally left behind.

 Ken
   
Can I propose a change in nomenclature?

As has been pointed out before, we're talking about clients and servers,
but VOS is technically P2P, so I propose we talk
about fast-side and slow-side, or local and remote. Hopefully this
conveys what we mean by client and server, but fits in
better with the P2P architecture.

I don't really see why most peers need to know *exactly* where someone
is looking.  If you have a couple of defined look-positions, and then a
look-at(object), most of the hard stuff can be done fast-side.
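
A rough sketch of what look-at(object) could boil down to fast-side
(illustrative math with made-up names, not a VOS interface): the wire carries
only a target id, and each peer derives the head pose locally.

    #include <cmath>

    struct Vec3     { double x, y, z; };
    struct HeadPose { double yaw, pitch; };   // radians, assuming y-up

    // Derive head yaw/pitch from the avatar's head position and the position
    // of the object it has been told to look at.
    HeadPose lookAt(const Vec3& head, const Vec3& target)
    {
        double dx = target.x - head.x;
        double dy = target.y - head.y;
        double dz = target.z - head.z;
        double horizontal = std::sqrt(dx * dx + dz * dz);
        return HeadPose{ std::atan2(dx, dz), std::atan2(dy, horizontal) };
    }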

 -sconzey




Re: [vos-d] Animation comment

2006-12-01 Thread S Mattison

As long as we can get the VOS guys to realize that we're all on the local
side: TerAngreal is local, and the OmniVos server and everything it
transfers is remote.

Sounds like a good convention otherwise. =)

On 12/1/06, Jonathan Jones [EMAIL PROTECTED] wrote:


 Can I propose a change in nomenclature?

As has been pointed out before, we're talking about clients and servers,
but VOS is technically P2P, so I propose we talk
about fast-side and slow-side, or local and remote. Hopefully this conveys
what we mean by client and server, but fits in
better with the P2P architecture.

I don't really see why most peers need to know *exactly* where someone is
looking.  If you have a couple of defined look-positions, and then a
look-at(object), most of the hard stuff can be done fast-side.
