Hi there,

I'd like to render a rather high-poly object (ca. 50k-500k triangles), built out of components of different sizes and tessellation levels, into low-res render targets (32^2 - 256^2) for further processing. Reading the data from the VRML file gives me a scenegraph containing the different parts of the object in its nodes, i.e. one group node holding all triangles of each part (correct me if I'm wrong!). Since a node is either culled as a whole or drawn as a whole, I suppose that from viewpoints where only a small fraction of one part (one node) is visible, the speedup from scenegraph-based view-frustum culling will be rather small, or culling may even slow things down.
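Just so we're talking about the same thing: by "scenegraph-based view-frustum culling" I mean the usual per-node bounding-volume test, roughly like the following sketch (plain Python, all names made up, nothing OpenSG-specific):

```python
# Per-node culling sketch: an axis-aligned box is rejected if it lies
# entirely on the negative side of any frustum plane. A plane is given
# as (normal, d), with the convention n . p + d >= 0 for points inside.
# Purely illustrative -- no OpenSG types involved.

def aabb_outside_plane(lo, hi, normal, d):
    # Pick the box corner furthest along the plane normal (the "p-vertex");
    # if even that corner is behind the plane, the whole box is outside.
    p = tuple(hi[i] if normal[i] >= 0 else lo[i] for i in range(3))
    return sum(normal[i] * p[i] for i in range(3)) + d < 0

def node_visible(lo, hi, frustum_planes):
    """True unless the node's box is fully outside some frustum plane."""
    return not any(aabb_outside_plane(lo, hi, n, d) for n, d in frustum_planes)
```

My point is that this test is all-or-nothing per node: if the box intersects the frustum at all, every triangle in the node gets drawn.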

Would any further subdivision of the model parts (e.g. a bounding-volume hierarchy going down to the individual triangle) pay off under these circumstances, and/or is there a break-even point? How would such a partitioning be accomplished and smoothly integrated into the existing scenegraph?
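To make the kind of subdivision I have in mind concrete, here is a rough sketch of a top-down median split over a part's triangle list, down to a fixed leaf size (again plain Python with made-up names, not meant as an OpenSG implementation):

```python
# Top-down BVH build over one node's triangle list. A triangle is a
# 3-tuple of (x, y, z) vertices. Illustrative only.

def centroid(tri):
    """Average of the three vertices of a triangle."""
    return tuple(sum(v[i] for v in tri) / 3.0 for i in range(3))

def bounds(tris):
    """Axis-aligned bounding box over all vertices of all triangles."""
    pts = [v for tri in tris for v in tri]
    lo = tuple(min(p[i] for p in pts) for i in range(3))
    hi = tuple(max(p[i] for p in pts) for i in range(3))
    return lo, hi

def build_bvh(tris, leaf_size=64):
    """Recursive median split along the longest axis of the current box."""
    lo, hi = bounds(tris)
    node = {"bounds": (lo, hi), "tris": None, "children": None}
    if len(tris) <= leaf_size:
        node["tris"] = tris          # small enough: keep as a leaf
        return node
    axis = max(range(3), key=lambda i: hi[i] - lo[i])
    tris = sorted(tris, key=lambda t: centroid(t)[axis])
    mid = len(tris) // 2             # median split on centroids
    node["children"] = (build_bvh(tris[:mid], leaf_size),
                        build_bvh(tris[mid:], leaf_size))
    return node

def leaf_count(node):
    """Total number of triangles stored in the leaves below this node."""
    if node["children"] is None:
        return len(node["tris"])
    return sum(leaf_count(c) for c in node["children"])
```

The open question for me is essentially at which leaf size the per-node culling overhead starts to outweigh the triangles saved.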


BTW: Has anybody used floating-point render targets with OpenSG yet? Is there a "common way" to integrate them, or do you have to build that from scratch? Any other recommendations/pitfalls?



Thanks a lot for your replies,
Matthias

_______________________________________________
Opensg-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/opensg-users
