Hello Carsten,
>> 1. Is this infrastructure appropriate for handling cases like my
>> line rendering problem?
> uhm, probably, it's a bit hard to say, because the only things you have
> mentioned about your algorithms are depth buffer and viewport
> manipulation ;)

Sorry for that. Basically, I have to handle 3 independent line rendering
cases with the following preconditions:

Given: face geometry + edge geometry (but no silhouettes) of the entities
to be displayed in my scene

1. render face geometry and edge geometry + silhouette edges of the geometry
2. render edge geometry only + silhouettes, but with the hidden lines removed
3. a) render edge geometry only + silhouettes, but with the hidden lines
      rendered in a different style
   b) like a), but with the face geometry rendered with transparency

Algorithms: *)

a) edges + hidden lines removed (see the raw-GL sketch further below)
   1. Disable writing to the color buffer
   2. Set depth function to GL_LEQUAL
   3. Enable depth testing
   4. Render polygons (with polygon offset)
   5. Enable writing to the color buffer
   6. Render edges

b) edges + hidden lines in a different style
   1. Set depth function to GL_LEQUAL
   2. Color buffer and depth buffer enabled
   3. Set color and line style for hidden lines
   4. Render edges
   5. Disable color buffer
   6. Render polygons
   7. Set color and line style for visible edges
   8. Render edges

c) silhouettes (see the raw-GL sketch further below)
   1. Clear stencil buffer to zero
   2. Disable color and depth buffer
   3. Set stencil function to always pass, and set the stencil operation
      to increment
   4. Translate the object by +1 pixel in y-direction using glViewport and
      render polygons
   5. Translate the object by -2 pixels in y-direction using glViewport and
      render polygons
   6. Translate the object by +1 pixel in x-direction and +1 pixel in
      y-direction using glViewport and render polygons
   7. Translate the object by -2 pixels in x-direction using glViewport and
      render polygons
   8. Translate the object by +1 pixel in x-direction using glViewport
   9. Enable color and depth buffer
   10. Set stencil function to pass if the stencil value is 2 or 3
       (possible values are 0-4)
   11. Render a rectangle of viewport size to draw the silhouettes

I know of more 'modern' silhouette line rendering algorithms; the most
recent one uses the geometry shader and needs only a single render
pass. **) But I would like to start with the classic approach in order to
support rather ancient hardware/driver platforms.

I have two problems with these algorithms with respect to OpenSG:

1. The manipulation of the depth buffer in algorithm a) in case of
   transparent geometry
2. Currently I see no way to manipulate the current viewport from a
   material chunk in algorithm c)

*)  Advanced Graphics Programming Using OpenGL, pp. 382-388
    Tom McReynolds, David Blythe
    Morgan Kaufmann, Elsevier, 2005

**) Single Pass GPU Stylized Edges
    P. Hermosilla & P.P. Vázquez
    IV Iberoamerican Symposium in Computer Graphics - SIACG (2009), pp. 1-8
    F. Serón, O. Rodríguez, J. Rodríguez, E. Coto (Editors)
    http://www.cgstarad.com/NPR/GSContours.pdf

> In any case it is fairly flexible when it comes to
> rendering to different render targets or performing multiple passes, so
> hopefully should be able to do what you need.
> The basic idea is to have a means to direct rendering to a different
> render target (i.e. an FBO)...

This is what I have in mind. However, currently I do not know how to use it
properly, so I'm collecting as much information as I can get about this
part of the OpenSG framework.
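For reference, this is roughly how I would express algorithm a) in raw GL;
drawFaceGeometry() and drawEdgeGeometry() are just placeholders for the
respective draw calls:

    // Pass 1: prime the depth buffer with the face geometry,
    //         but keep the color buffer untouched.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // (1)
    glDepthFunc(GL_LEQUAL);                                // (2)
    glEnable(GL_DEPTH_TEST);                               // (3)
    glEnable(GL_POLYGON_OFFSET_FILL);                      // (4)
    glPolygonOffset(1.0f, 1.0f);
    drawFaceGeometry();
    glDisable(GL_POLYGON_OFFSET_FILL);

    // Pass 2: draw the edges; edge fragments behind the primed faces
    //         fail the depth test, so hidden lines are removed.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);       // (5)
    drawEdgeGeometry();                                    // (6)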
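And this is my reading of algorithm c): shifting the viewport origin by
whole pixels moves the rendered object on screen without touching the
projection, and the stencil buffer counts how often each pixel is covered.
The four shifts are written here as absolute offsets from the original
viewport rather than the cumulative +1/-2 steps above; the "pass if 2 or 3"
test is encoded via the stencil mask (testing bit 1 only), which works
because the counter can reach at most 4. drawFaceGeometry() and
drawViewportSizedQuad() are again placeholders:

    GLint vp[4];
    glGetIntegerv(GL_VIEWPORT, vp);                        // current viewport

    glClearStencil(0);                                     // (1)
    glClear(GL_STENCIL_BUFFER_BIT);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // (2)
    glDepthMask(GL_FALSE);
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0, ~0u);                      // (3)
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);

    // (4)-(8): render the faces four times with the viewport origin
    //          shifted by one pixel in +y, -y, +x and -x.
    const GLint offsets[4][2] = { {0, 1}, {0, -1}, {1, 0}, {-1, 0} };
    for (int i = 0; i < 4; ++i)
    {
        glViewport(vp[0] + offsets[i][0], vp[1] + offsets[i][1],
                   vp[2], vp[3]);
        drawFaceGeometry();
    }
    glViewport(vp[0], vp[1], vp[2], vp[3]);                // (8) restore

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);       // (9)
    glDepthMask(GL_TRUE);
    glEnable(GL_DEPTH_TEST);
    // (10): pass only for stencil values 2 or 3 (bit 1 set, values <= 4).
    glStencilFunc(GL_EQUAL, 2, 2);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawViewportSizedQuad();                               // (11)
    glDisable(GL_STENCIL_TEST);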
I have looked into the implementation of the HDRStage for instance, but I
have little knowledge about things like 'RenderPartitions' and the rules I
have to obey for stage implementations. Hence, I bluntly ask for some
introduction into this specific topic in order to minimize a trial and
error approach.

>> 3. How and when should the transfer of the color information from
>> the FBO inside of a SimpleStage derived class into the GL color buffer
>> take place?
> it does not happen automatically. A common case is that the color
> attachment(s) of the FBO is a TextureBuffer and the generated texture is
> then used by a material elsewhere in the scene. You can do a framebuffer
> blit from the post render callback or issue a pass that renders a full
> screen quad with the color attachments content as texture.

Do you have an example at hand for the framebuffer blit operation?

I would really be interested in how you (or any other listener) would
tackle the above problem within OpenSG. My basic intended structure looks
like this:

                                 |
                                 |
                            Multiswitch
                                 |
                                 |
       +----------------+--------+-------+-----------------+
       |                |                |                 |
       |                |                |                 |
Polygon geometry  edge geometry  edge switch group  silhouette stage
                                         |
                                         |
                               +---------+---------+
                               |                   |
                               |                   |
                          hidden line          hidden line
                         removed stage        styled stage

Best,
Johannes
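P.S.: Regarding the framebuffer blit, on the raw GL level I assume it boils
down to something like the lines below (GL 3.0 / EXT_framebuffer_blit;
fboId, width and height stand for the stage's FBO id and the target
viewport size). What I do not yet see is where to issue this from the post
render callback of a stage:

    glBindFramebuffer(GL_READ_FRAMEBUFFER, fboId);    // the stage's FBO
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);        // default framebuffer
    glReadBuffer(GL_COLOR_ATTACHMENT0);               // attachment to copy
    glBlitFramebuffer(0, 0, width, height,            // source rectangle
                      0, 0, width, height,            // destination rectangle
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);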