Hi:
 
I'd like some advice on whether or not to compile my scene graph, and optimizations in general.
 
My visualization application must manage up to 10,000 "icons", each represented as a LineStripArray. There are up to 6 (data-driven) line segments per icon, so that's 60,000 line segments to be drawn each time the display needs to be updated. Updates happen frequently, as different data is mapped onto line segments or the number of line segments per icon changes. Icons may or may not be drawn at all, depending on user interactions. Thus, the geometry changes often.
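In case it helps, here is roughly how one icon's geometry gets built (a trimmed-down sketch; buildIconGeometry, coords, and stripCounts are stand-ins for my real data plumbing):

    import javax.media.j3d.GeometryArray;
    import javax.media.j3d.LineStripArray;

    // One icon = one LineStripArray with up to 6 data-driven strips.
    LineStripArray buildIconGeometry(float[] coords, int[] stripCounts) {
        int vertexCount = coords.length / 3;          // x, y, z per vertex
        LineStripArray geom = new LineStripArray(vertexCount,
                GeometryArray.COORDINATES, stripCounts);
        geom.setCoordinates(0, coords);
        // Set before the branch goes live, so coordinates can be rewritten later.
        geom.setCapability(GeometryArray.ALLOW_COORDINATE_WRITE);
        return geom;
    }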
 
I am currently building the scene graph from the top down, and must represent each icon as an individual Shape3D node, since I may only display a subset of all icons. All icons exist at the same depth in the scene graph and are parented by a Switch node that governs which ones are displayed. I've tried copying data into the S3D nodes, and also tried updating the S3D geometries BY_REFERENCE, with similar results - I only get "interactive" response at fewer than 1,000 icons. It seems like the callback overhead to my local geometries negates any gain from not copying data...
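The BY_REFERENCE path looks roughly like this (again a sketch; IconUpdater and freshCoords are placeholders). The per-icon updateData() callback is what I suspect is eating the savings:

    import javax.media.j3d.Geometry;
    import javax.media.j3d.GeometryArray;
    import javax.media.j3d.GeometryUpdater;

    // Java 3D only lets the referenced array be touched inside updateData().
    class IconUpdater implements GeometryUpdater {
        private final float[] newCoords;

        IconUpdater(float[] newCoords) { this.newCoords = newCoords; }

        public void updateData(Geometry geometry) {
            float[] coords = ((GeometryArray) geometry).getCoordRefFloat();
            System.arraycopy(newCoords, 0, coords, 0, newCoords.length);
        }
    }

    // For each icon whose data changed (geom was created with
    // GeometryArray.BY_REFERENCE and ALLOW_REF_DATA_WRITE set):
    //     geom.updateData(new IconUpdater(freshCoords));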
 
I only compile the scene graph when a new set of geometries is added (when a new data set is loaded) or a new visual representation is added (I can draw color icons as well as line icons). Does it make sense to compile at all, since my geometries are anything but static? Furthermore, should I be trying out immediate mode rendering, since my scene graph is very flat and my geometry changes often?
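For reference, the branch gets (re)built and compiled roughly like this (simplified; buildIconBranch is a placeholder name):

    import javax.media.j3d.BranchGroup;
    import javax.media.j3d.Shape3D;
    import javax.media.j3d.Switch;

    BranchGroup buildIconBranch(Shape3D[] icons) {
        Switch sw = new Switch(Switch.CHILD_MASK);
        sw.setCapability(Switch.ALLOW_SWITCH_WRITE);  // visible subset changes at runtime
        for (Shape3D icon : icons) {
            sw.addChild(icon);
        }
        BranchGroup bg = new BranchGroup();
        bg.setCapability(BranchGroup.ALLOW_DETACH);   // swapped out when a new data set loads
        bg.addChild(sw);
        bg.compile();   // the call I'm unsure is worth making
        return bg;
    }
    // Visibility is then driven with sw.setChildMask(someBitSet).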
 
I currently redraw all icons for scaling and minor translation operations. I am planning on adding a TransformGroup node to handle these operations instead - how bad is the performance hit of updating several thousand Transform matrices versus recomputing and copying data into the S3D nodes?
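What I have in mind is something like the following (a sketch; rescaleIcons and the per-icon TransformGroup array are hypothetical at this point):

    import javax.media.j3d.Transform3D;
    import javax.media.j3d.TransformGroup;
    import javax.vecmath.Vector3d;

    // Instead of rewriting coordinates, update each icon's transform.
    void rescaleIcons(TransformGroup[] iconTGs, double scale, Vector3d offset) {
        Transform3D t3d = new Transform3D();
        t3d.setScale(scale);
        t3d.setTranslation(offset);
        for (TransformGroup tg : iconTGs) {
            tg.setTransform(t3d);   // requires TransformGroup.ALLOW_TRANSFORM_WRITE
        }
    }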
 
thanks in advance!

jp

 
