Hi Vince,
On Wed, 2006-02-08 at 16:28 +0000, Vince Jennings wrote:
> Hi,
>
> Sorry to post this again - I previously posted a couple of weeks ago
> (without response), but in light of Dirk having missed some posts I
> thought I might entice a reply... :)
>
> A year ago we implemented avatars in an OpenSG application, inheriting
> from OpenSG classes for our avatar classes.
>
> We encountered some problems with doing this that were due to OpenSG's
> structure, namely:
>
> 1) The geometry of the avatar classes could not be fully shared by
> instances of the avatar as each geometry node 'collected' by the
> renderAction needs to be in its individual deformed state in the
> rendering list. This is clearly not possible when sharing geometry, so
> each instance requires at least its own vertex and normal fields,
> increasing the memory footprint and file size with each instance of an
> avatar (these are level of detail avatars with the highest level ~8000
> polygons).
The only way I see around that is doing the animation in a shader, or
building synchronized groups of avatars that share data. In the general
case, though, I don't see a way around per-instance deformed data, so
it's not really an OpenSG-specific problem unless I missed something.
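For the shader route, here's a rough, untested sketch of the vertex side
(matrix-palette skinning, plain GLSL in a C++ string; boneMatrix,
boneIndex and boneWeight are made-up names for this illustration). The
idea is that all instances keep sharing one rest-pose Geometry and only
upload their own bone palette as uniforms each frame:

// Illustrative only: matrix-palette skinning so the rest-pose geometry
// can stay shared between all avatar instances.  The only per-instance
// data left is the bone matrix array.
static const char *skinningVertexShader =
    "uniform mat4 boneMatrix[32];\n"   // per-instance bone palette
    "attribute vec4 boneIndex;\n"      // up to 4 bone indices per vertex
    "attribute vec4 boneWeight;\n"     // matching blend weights
    "void main()\n"
    "{\n"
    "    vec4 skinned = vec4(0.0);\n"
    "    for (int i = 0; i < 4; ++i)\n"
    "    {\n"
    "        int b = int(boneIndex[i]);\n"
    "        skinned += boneWeight[i] * (boneMatrix[b] * gl_Vertex);\n"
    "    }\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * skinned;\n"
    "}\n";

If I remember correctly the SHLChunk (setVertexProgram() plus a uniform
parameter for the palette) should be able to feed this in 1.x, but
please double-check the exact calls, I'm writing this from memory.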
> 2) When an animation such as a walk is applied to an avatar, the avatar's
> actual location in the scene (relative to its parent Transform node) is
> not determined until the animation is processed on the avatar node, but
> the parent Transform is not aware of the animation and has to be
> explicitly corrected from the avatar node to maintain the correct
> bounding box position. This also results in the bounding box being one
> frame behind as the Transform has already been processed.
>
> We are now returning to do further work on this and wondered whether
> there are new features in OpenSG that might help with these problems, or
> if anyone has any suggestions for a cure/alternative approach. We could
> resolve this by taking over the rendering of our avatars in the scene
> ourselves, but that looks to introduce more problems than it solves.
You can get the Transform updated by just invalidating the bvolume of
your Avatar every frame (Node::invalidateVolume()). That won't help with
being a frame late, though. The only way I see to fix that is to
artificially enlarge the bvolume so that it is big enough for whatever
movement happens in the next frame. That assumes you either know that
movement in advance or that the avatars have a limited motion speed. It
will result in somewhat less efficient culling, but that's much better
than popping artifacts due to bad bvolumes.
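Roughly, the per-frame fix-up could look like this (untested sketch;
Node::invalidateVolume() is the real call, the volume accessors are from
memory and may have slightly different names, and the padding assumes
you know a maximum speed for the avatar):

#include <OpenSG/OSGNode.h>

// Called once per frame for each avatar, before the render action runs.
void updateAvatarVolume(OSG::NodePtr avatarNode, float maxSpeed, float dt)
{
    // 1) Make sure the parent Transform chain picks up the animated extent.
    avatarNode->invalidateVolume();

    // 2) Pad the recomputed volume so it also covers next frame's motion.
    //    getVolume()/getBounds()/extendBy() are my recollection of the 1.x
    //    DynamicVolume interface -- double-check the exact names.
    OSG::DynamicVolume &vol = avatarNode->getVolume(true); // true: update
    OSG::Pnt3f min, max;
    vol.getBounds(min, max);

    const float pad = maxSpeed * dt;
    vol.extendBy(OSG::Pnt3f(min[0] - pad, min[1] - pad, min[2] - pad));
    vol.extendBy(OSG::Pnt3f(max[0] + pad, max[1] + pad, max[2] + pad));
}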
The real solution would be decoupling the animation from rendering by
having a separate update traversal. Given that traversals are not really
cheap in the current incarnation, and that in most apps only very few
nodes would need it (yours is probably different), I've shied away from
it so far. It's one of the (many ;) things I want to revisit for 2.x...
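Until such a traversal exists, an application-side stand-in is easy
enough: keep your own list of animated avatars and step them all right
before applying the render action, so culling already sees the new pose
and the padded bvolumes from above. A hypothetical sketch (Avatar and
AvatarUpdater are made-up names, nothing of this exists in OpenSG):

#include <cstddef>
#include <vector>

// Hypothetical interface your avatar classes might expose.
class Avatar
{
  public:
    virtual ~Avatar() {}
    virtual void update(float dt) = 0; // advance skeleton, fix up bvolume
};

// Application-level stand-in for the missing update traversal: every
// animated avatar registers once, the whole set is stepped per frame.
class AvatarUpdater
{
  public:
    void add(Avatar *a) { _avatars.push_back(a); }

    void updateAll(float dt)
    {
        for (std::size_t i = 0; i < _avatars.size(); ++i)
            _avatars[i]->update(dt);
    }

  private:
    std::vector<Avatar *> _avatars;
};

// Per-frame loop (sketch):
//     updater.updateAll(frameDt);    // animation + volume fix-up first
//     renderAction->apply(rootNode); // then culling sees fresh data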
Yours
Dirk
_______________________________________________
Opensg-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/opensg-users