Hi,

On Sat, 2009-08-01 at 10:27 -0500, Carsten Neumann wrote:
>       Hello Gerrit,
> 
> >>> I really would like to see only one instance in there.
> >> sure, that is fine with me. Can you give a description of the parts of 
> >> that system and what roles they play?
> > 
> > basically they all follow the VRML/X3D model. You have your time sensor 
> > and interpolator elements and connect them via field connectors (aka
> > routes). Let me take this part out of the CSM dir where it lives right
> > now and repackage this into a separate contrib dir to untangle things a
> > little.
> 
> hm, the interpolators unfortunately combine the constant data for the 
> animation with the changing data for playback. That means if I need to 
> play the same animation (started at different times) for two different 
> characters I need to duplicate the whole keyframe data, or am I missing 
> something ?

no, that's how VRML/X3D does animations, which from experience is not
necessarily the best way to do it ;) The current code started as a
simple extension for testing. It is by no means the only/correct
way to do it or the way it should be done.

What I was getting at, maybe not in the clearest form, is to have the
same basis for things and to go from there, and to separate things
that I see as not necessarily belonging together.

Concretely, I would like to agree on a way to handle frame functions
and (global / wall) time.

Similarly, I would like to have one set of interpolators. If you need
them in a different form, it is no problem to split the current ones or
find a way to extend them by adding simple adaptor classes that only
contain the cursor into the data, along the lines of your design. I
could even live with two sets if they use the same mechanisms to get
their inputs and provide their outputs, so they tie in with the rest
of the system.

And I would like to have the data flow modeling split from its usage,
e.g. like it is right now, with field connectors not being tied to
anything in particular. But they are currently tied into the changed
handling, so it might be worth thinking of something else.
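To sketch what I mean by a decoupled connector (all names made up here,
this is not the actual OpenSG field connector API): a route just
forwards values from a producer to registered consumers and knows
nothing about change handling.

```cpp
#include <functional>
#include <utility>
#include <vector>

// Minimal data-flow route: forwards a value from any producer to any
// number of consumers. It knows nothing about who produces the value
// or how change tracking works.
template <class T>
class Route
{
  public:
    void addTarget(std::function<void(const T &)> target)
    {
        _targets.push_back(std::move(target));
    }

    // Push a value downstream to all registered targets.
    void send(const T &value) const
    {
        for(const auto &target : _targets)
            target(value);
    }

  private:
    std::vector<std::function<void(const T &)>> _targets;
};
```

An interpolator output would call send(), and e.g. a bone transform
setter would be one of the registered targets.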


> That is why there is the difference between an AnimationTemplate (with 
> ATracks) and an Animation (with AChannels), one just stores data, the 
> other is a "cursor" into that data - similar to how Cal3d splits things 
> with its Core and non-Core types.

makes sense.
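For reference, that shared-data / cursor split could look roughly like
this (all names are hypothetical, a sketch of the idea rather than the
actual CSM or Cal3d API): many cursors reference one immutable track,
so starting the same animation twice duplicates no keyframe data.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Shared, immutable keyframe data (the "AnimationTemplate" side).
struct Track
{
    std::vector<float> keyTimes;   // sorted key times
    std::vector<float> keyValues;  // one scalar value per key, for simplicity
};

// Lightweight playback cursor into the shared data (the "Animation" side).
class Channel
{
  public:
    explicit Channel(std::shared_ptr<const Track> track) : _track(track) {}

    // Linear interpolation at time t; clamps outside the key range.
    float evaluate(float t) const
    {
        const std::vector<float> &times = _track->keyTimes;

        if(t <= times.front()) return _track->keyValues.front();
        if(t >= times.back())  return _track->keyValues.back();

        std::size_t i = 1;
        while(times[i] < t) ++i;

        float u = (t - times[i - 1]) / (times[i] - times[i - 1]);
        return (1.f - u) * _track->keyValues[i - 1] + u * _track->keyValues[i];
    }

  private:
    std::shared_ptr<const Track> _track;  // shared, never copied per character
};
```

Two characters playing the same animation at different times are then
just two Channel instances pointing at the same Track.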

> >> We specifically need to handle key frame animation for vertex skinning 
> >> for characters. Any hints how existing parts are best used/extended to 
> >> support that are also very welcome.
> > 
> > If you can work with the std vrml/x3d interpolators
> > (IIRC pos/scalar/ori/coordinate), which I hope as skin+bones are in
> > both, the basics should all be there. 
> 
> yes, a bone is essentially just a coordinate system. One thing to 
> consider though: there is often more than one animation applied to a 
> skeleton. For that case you need to accumulate all input for one bone in 
> some way, either by keeping track if this is the first change to a bone 
> in this frame and making it absolute and all subsequent ones relative or 
> accumulating all changes into a temporary and then set it once all 
> animations are applied.

ok, basically you need something to mix n input streams into one output
stream. Hmm, ideally I would like to find a general form for this and
not necessarily tie it to any of the animation classes.
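One general shape for that, independent of the animation classes, could
be the accumulate-then-apply variant you describe (hypothetical names,
just a sketch):

```cpp
// Collects the contributions of n animations for one target (e.g. a
// bone transform component) into a temporary and hands out the
// combined result exactly once per frame.
class Accumulator
{
  public:
    // Called by each animation that affects the target this frame.
    void contribute(float delta)
    {
        _sum  += delta;
        _dirty = true;
    }

    // Called once per frame after all animations ran; returns true and
    // the combined value if there was any input, then resets.
    bool flush(float &out)
    {
        if(!_dirty)
            return false;

        out    = _sum;
        _sum   = 0.f;
        _dirty = false;
        return true;
    }

  private:
    float _sum   = 0.f;
    bool  _dirty = false;
};
```

The same shape works for matrices or quaternions instead of a plain
float; the point is that the mixer sits between the streams and the
target and is not itself an animation class.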

> > The two tricky bits left are the global frame handler which updates the
> > time and something that makes sure the skin+bone stuff is evaluated only
> > once a frame. For the frame handler I have to see if we can handle it
> > like the vrml loader which can be extended so it can live outside the
> > osg core (I don't want to have fileIO too dependent on a contrib lib)
> > or if we have to push this into the core.
> 
> agreed on the dependency. Why does the frame handler need to be 
> extensible (perhaps it must be, I just don't understand the reason yet)? 

sorry, that was not too clear; it was the other way around, the file
handler was the extensible thing. In VRML we can handle the animation
nodes not in the main fileIO lib but in a contrib lib. So currently
the TimeSensor is aware that it is inside CSM and can directly access
parts of it.

> Grepping for framehandler only turned up the call from the CSMGLUTWindow 
> to CSM::frame, which seems to only update time and trigger the 
> SensorTask/TimeSensors.

yes, plus triggering the drawing at the end. Was there more you
expected?

> For time I think there needs to be a way that the user can supply it, in 
> case there is other stuff in the application that has to run off the 
> same clock (maybe that is already possible?).
>
> For example in a VRJuggler clustered app we'd like to feed time from a 
> device into the animation since it is guaranteed to be in sync on all nodes.

Right now CSM does not allow it (IIRC), but adding a callback instead
of the fixed getSystemTime call should not be a problem. 
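Such a callback could be as simple as this (a sketch; the TimeSource
name and the std::chrono default are assumptions, not the existing CSM
code):

```cpp
#include <chrono>
#include <functional>
#include <utility>

// Time source with a replaceable clock: defaults to the system clock,
// but an application (e.g. a clustered VR Juggler app) can install its
// own function so all nodes run off the same synchronized time.
class TimeSource
{
  public:
    using TimeFunc = std::function<double()>;

    void setTimeFunc(TimeFunc func) { _func = std::move(func); }

    double now() const
    {
        if(_func)
            return _func();

        // Default: seconds on the steady clock.
        using namespace std::chrono;
        return duration<double>(
            steady_clock::now().time_since_epoch()).count();
    }

  private:
    TimeFunc _func;  // empty -> use the default clock
};
```

The cluster case then just installs a function that reads the time from
the synchronized device instead of the local clock.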

> > Short question, what is the grouping (for example AnimationTemplate)
> > for? Just to deal with a complex animation through one object?
> 
> yes, primarily. Given that a human character model has 20-30 bones I 
> consider it somewhat essential that I can start the "walk" animation
> with just one call. Animation (the playback object for an ATemplate) is 
> also the level where the time scale, the playback mode (once, loop, 
> swing) and direction (fwd, bwd) are set.

So it is basically the TimeSensor.
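The time-to-parameter mapping such a playback object performs could be
sketched like this (hypothetical, not the actual Animation code):

```cpp
#include <algorithm>
#include <cmath>

enum class PlayMode { Once, Loop, Swing };

// Maps elapsed wall time to a normalized animation parameter in [0, 1],
// honoring the playback mode and direction.
float mapTime(double elapsed, double duration, PlayMode mode, bool forward)
{
    double t = 0.0;

    switch(mode)
    {
        case PlayMode::Once:   // clamp at the end
            t = std::min(std::max(elapsed / duration, 0.0), 1.0);
            break;

        case PlayMode::Loop:   // wrap around
            t = std::fmod(elapsed, duration) / duration;
            break;

        case PlayMode::Swing:  // ping-pong between the ends
        {
            double cycle = std::fmod(elapsed, 2.0 * duration);
            t = (cycle < duration) ? cycle / duration
                                   : 2.0 - cycle / duration;
            break;
        }
    }

    return static_cast<float>(forward ? t : 1.0 - t);
}
```

A time scale would just multiply elapsed before the mapping; everything
downstream only ever sees the normalized parameter.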

kind regards,
  gerrit





_______________________________________________
Opensg-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/opensg-users
