Hi Matthias,
On Mon, 2004-04-05 at 14:59, Matthias Biedermann wrote:
>
> I'd like to render a rather high-poly object (ca. 50k-500k tris), which is
built out of components of different sizes and tessellation levels into low-
> res render targets (32^2 - 256^2) for further processing. Reading the data
> from the VRML-file gives me a scenegraph containing the different parts of
> the object in each node, i.e. a group node that keeps all triangles of this
> part (correct me, if I'm wrong!).
Not quite. The VRML Loader uses MaterialGroups to represent VRML Shape
nodes and puts one Geometry below each of them. So for a flat VRML file you
will get one Group node at the top with a bunch of MaterialGroup children,
each holding a single Geometry.
If your VRML file contains only a single shape, you will get one Group with
one MaterialGroup and one Geometry.
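To see the structure for yourself, here is a quick, untested sketch that
loads a file and dumps the core type of every node. "model.wrl" is just a
placeholder, and the exact accessor names (getCName() etc.) are from memory,
so double-check them against the headers:

    #include <OpenSG/OSGConfig.h>
    #include <OpenSG/OSGBaseFunctions.h>
    #include <OpenSG/OSGNode.h>
    #include <OpenSG/OSGSceneFileHandler.h>
    #include <iostream>

    using namespace osg;

    // Recursively print the core type of each node, indented by depth,
    // to see the Group -> MaterialGroup -> Geometry layering.
    void dumpGraph(NodePtr node, UInt32 depth = 0)
    {
        for(UInt32 i = 0; i < depth; ++i)
            std::cout << "  ";
        std::cout << node->getCore()->getType().getCName() << std::endl;

        for(UInt32 i = 0; i < node->getNChildren(); ++i)
            dumpGraph(node->getChild(i), depth + 1);
    }

    int main(int argc, char **argv)
    {
        osgInit(argc, argv);

        // Load the VRML file and dump the resulting graph.
        NodePtr scene = SceneFileHandler::the().read("model.wrl");
        dumpGraph(scene);

        return 0;
    }

For a flat file you should see one Group at the top, one MaterialGroup per
shape below it, and one Geometry below each MaterialGroup.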
> Thus, I suppose if the scene is rendered
> from viewpoints where only small fractions of one part (one node) are
> visible, any acceleration of the scenegraph-based view-frustum-culling will
> become rather small or will even slow things down.
Not sure what you mean here. These are actually the situations where
view-frustum culling helps the most, because only a small part of the graph
needs to be rendered.
If your whole model ends up in a single node you are right: culling works on
a per-node level and thus can't do anything in that case.
> Would any further subdivision of the model-parts (e.g. BV-Hierarchy down to
> each triangle) pay off under these circumstances and/or would there be any
> "break-even"? How would such a partitioning be accomplished and smoothly
> integrated into the existing scenegraph?
You can use the SplitGraphOp, which subdivides all Geometry nodes that
have more than a threshold number of triangles. The break-even point
depends on your system and the model; as a (very rough) rule of thumb
I'd try between 1k and 10k triangles per node (and don't forget to run a
StripeGraphOp afterwards).
Note that this actually changes the graph; there is no way to do it
hidden from the user.
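Roughly like this (untested sketch; I don't remember offhand whether the
maximum triangle count goes into the constructor or a setter, so treat the
arguments as assumptions and check OSGSplitGraphOp.h for the exact
interface):

    #include <OpenSG/OSGSplitGraphOp.h>
    #include <OpenSG/OSGStripeGraphOp.h>

    using namespace osg;

    // 'scene' is the root NodePtr you got back from the loader.
    void splitAndStripe(NodePtr scene)
    {
        // Split every Geometry with more than ~5000 triangles into
        // smaller chunks; 5000 is just a starting point inside the
        // 1k-10k rule of thumb, tune it for your system and model.
        SplitGraphOp *split = new SplitGraphOp("Split", 5000);
        split->traverse(scene);

        // Re-stripe the resulting Geometries afterwards, as mentioned
        // above.
        StripeGraphOp *stripe = new StripeGraphOp;
        stripe->traverse(scene);
    }

After that the bounding volumes of the new, smaller nodes give the
view-frustum culling something to work with.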
> BTW: Has anybody used floating-point render targets with OpenSG yet? Is
> there a "common way" to integrate them or do you have to build it from
> scratch? Any other recommendations/pitfalls?
The OpenGL community has (finally!) come up with a useful proposal for
rendering to texture, see the "Request For Comment: EXT_render_target
proposal" on www.opengl.org. Once that proposal is finalized and implemented
in drivers, supporting it in OpenSG will make sense.
Right now there is no good way to do it. I have just received some code that
uses Pbuffers, which AFAIK are currently a requirement for floating-point
render targets, but it is not integrated yet.
Has anybody else done that? How did you do it?
Thanks
Dirk