Hello Gerrit,

Gerrit Voß wrote:
> On Sat, 2009-08-01 at 10:27 -0500, Carsten Neumann wrote:
>> hm, the interpolators unfortunately combine the constant data for the 
>> animation with the changing data for playback. That means if I need to 
>> play the same animation (started at different times) for two different 
>> characters I need to duplicate the whole keyframe data, or am I missing 
>> something?
> 
> no, that's how VRML/X3D does animations, which from experience is not
> necessarily the best way to do it ;) The current thing started as a
> simple extension for testing. It is by no means the only/correct
> way to do it or the way it should be done. 

ok.

> What I was getting at, maybe not in the clearest form, is to have the
> same basis for things and going from there. And to separate things
> which I see as not necessarily belonging together.

that is a goal that is way too reasonable not to agree to ;) ;)

> Concrete, I would like to agree on a way to handle frame functions and
> (global / wall) time. 

is a frame function just something that gets called every frame, or is 
there more to it? One direction I would like to stay away from is 
providing an actual application framework, because you can normally 
only have one of those (so using OpenSG with VRJuggler would become harder).
What would be fine is one function the app has to call per frame, from 
which all the rest is triggered, pretty much like we now have 
Window::render.
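
To make sure we mean the same thing, here is a rough sketch of what I 
have in mind (all names invented for illustration, not existing OpenSG 
API):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// the app makes exactly one call per frame, and everything else
// (sensors, animations, finally the drawing) hangs off of it
class FrameTasks
{
  public:
    typedef std::function<void (double)> Task;  // receives current time

    void addTask(const Task &task)
    {
        _tasks.push_back(task);
    }

    // the single per-frame entry point, analogous to Window::render
    void frame(double time)
    {
        for(std::size_t i = 0; i < _tasks.size(); ++i)
            _tasks[i](time);
    }

  private:
    std::vector<Task> _tasks;
};
```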

> Similarly, I would like to have one set of interpolators. If you need
> them in a different form, no problem to split the current one or find a
> way to extend them by adding simple adaptor classes that only contain
> the cursor into the data, along the lines of your design. I could
> even live with two sets if they use the same mechanisms to get their
> inputs and provide their outputs, so they tie in with the rest of the
> system.

ok, makes sense. I need to look a bit more at the things in CSM to 
get a better idea of how to unify this.
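
Just so we are talking about the same split: something like this is what 
I had in mind - shared keyframe data plus a small per-character cursor, 
so two characters can play the same animation without duplicating the 
keys (all names invented, not the current interpolator API):

```cpp
#include <cstddef>
#include <vector>

struct KeyframeData                    // shared between all characters
{
    std::vector<double> keys;          // key times, ascending
    std::vector<float>  values;        // one value per key
};

struct PlaybackCursor                  // small per-character state
{
    const KeyframeData *data;
    double              startTime;     // when this instance was started

    float evaluate(double now) const   // linear interpolation at 'now'
    {
        double t = now - startTime;
        const std::vector<double> &k = data->keys;

        if(t <= k.front()) return data->values.front();
        if(t >= k.back ()) return data->values.back ();

        std::size_t i = 1;
        while(k[i] < t) ++i;           // find the surrounding keys

        double f = (t - k[i-1]) / (k[i] - k[i-1]);
        return float((1.0 - f) * data->values[i-1] + f * data->values[i]);
    }
};
```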

> And I would like to have the data flow modeling split from its usage,
> e.g. like it is right now with field connectors not being tied to
> anything particular. But they are currently tied into the changed
> handling, so it might be worth thinking of something else. 

hm, sorry, I have a bit of trouble following you here (you are saying you 
want things separated and the connectors are an example of that, but 
then you say they are tied to the changed callback ;) ). Can you say a 
bit more about how you want the data flow done (or, if it's easier, how 
you don't want it)?

>> For that case you need to accumulate all input for one bone in
>> some way: either by keeping track of whether this is the first change
>> to a bone in this frame (making it absolute and all subsequent ones
>> relative), or by accumulating all changes into a temporary and then
>> setting it once all animations are applied.
> 
> ok, basically you need something to mix n input streams into one output
> stream. Hmm, ideally I would like to find a general form for this and
> not necessarily tie it to any of the animation classes.

maybe an N-to-1 field connector that passes on values only after all N 
sources have fired?
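
Something along these lines, maybe (just a sketch, all names made up):

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// buffers one value per source and fires the output only once every
// source has delivered for the current frame, then resets
class NToOneConnector
{
  public:
    typedef std::function<void (const std::vector<float> &)> Output;

    NToOneConnector(std::size_t numSources, const Output &out)
        : _values(numSources, 0.f), _fired(numSources, false), _out(out)
    {
    }

    void input(std::size_t source, float value)
    {
        _values[source] = value;
        _fired [source] = true;

        for(std::size_t i = 0; i < _fired.size(); ++i)
            if(_fired[i] == false)
                return;                      // not all sources fired yet

        _out(_values);                       // pass accumulated values on
        _fired.assign(_fired.size(), false); // reset for the next frame
    }

  private:
    std::vector<float> _values;
    std::vector<bool>  _fired;
    Output             _out;
};
```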

>> agreed on the dependency. Why does the frame handler need to be 
>> extensible (perhaps it must be, I just don't understand the reason yet)? 
> 
> sorry, that was not too clear; it was the other way around: the file
> handler was the extensible thing.

oh, ok.

> In VRML we can handle the animation
> nodes not in the main fileIO lib part but in a contrib lib. So currently
> the TimeSensor is aware that it is inside CSM and can directly access
> parts of it.

how about taking the lazy way here and just creating an animation lib, 
and having file IO depend on that?

>> Grepping for framehandler only turned up the call from the CSMGLUTWindow 
>> to CSM::frame, which seems to only update time and trigger the 
>> SensorTask/TimeSensors.
> 
> yes, plus triggering the drawing in the end. Was there more you
> expected?

not really, that was meant to explain why I didn't understand where you 
need extensibility - but that's all clear now.

>> For example in a VRJuggler clustered app we'd like to feed time from a 
>> device into the animation since it is guaranteed to be in sync on all nodes.
> 
> Right now CSM does not allow it (IIRC), but adding a callback instead
> of the fixed getSystemTime call should not be a problem. 

yeah, just checking if it was lurking somewhere already ;)
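
For reference, something as simple as this would already cover the 
VRJuggler case - a settable time callback instead of a fixed 
getSystemTime call (hypothetical names, not current CSM code):

```cpp
#include <functional>

// the app can install its own time callback, e.g. one that reads a
// device that is synchronized across all cluster nodes
class TimeSource
{
  public:
    typedef std::function<double (void)> TimeFunc;

    void setTimeFunc(const TimeFunc &func)
    {
        _func = func;
    }

    double getTime(void) const
    {
        return _func ? _func() : 0.0;   // fall back to 0 without a callback
    }

  private:
    TimeFunc _func;
};
```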

>>> Short question, what is the grouping (for example AnimationTemplate)
>>> for? Just to deal with a complex animation through one object?
>> yes, primarily. Given that a human character model has 20-30 bones I 
>> consider it somewhat essential that I can start the "walk" animation
>> with just one call. Animation (the playback object for an ATemplate) is 
>> also the level where the time scale, the playback mode (once, loop, 
>> swing) and direction (fwd, bwd) are set.
> 
> So it is basically the TimeSensor.

yes.
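
As an aside, the once/loop/swing mapping I mentioned boils down to 
something like this (names invented for the example):

```cpp
#include <cmath>

enum PlayMode { PlayOnce, PlayLoop, PlaySwing };

// fold scaled wall time into the animation's [0, length] interval
double mapTime(double t, double length, double scale, PlayMode mode)
{
    double s = t * scale;

    switch(mode)
    {
        case PlayOnce:                    // clamp at the end
            return (s < length) ? s : length;

        case PlayLoop:                    // wrap around
            return std::fmod(s, length);

        case PlaySwing:                   // ping-pong forward/backward
        {
            double u = std::fmod(s, 2.0 * length);
            return (u < length) ? u : 2.0 * length - u;
        }
    }
    return s;
}
```

Backward playback would then just be `length - mapTime(...)`.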

        Cheers,
                Carsten

_______________________________________________
Opensg-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/opensg-users
