Re: [osg-users] Open Asset Import Library
I've just tried to load a Collada object with materials using your assimp plugin, and it worked fine. However, .blend files created with Blender 2.64 didn't produce any visual output. So it turns out assimp isn't the holy grail :|

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
Re: [osg-users] Projective Multi-Texturing
On Thu, Oct 25, 2012 at 9:32 PM, Glenn Waldron gwald...@gmail.com wrote:

> I would try multipass rendering first. It is likely to be the slowest, but also probably the easiest to implement, and you don't have to worry about exceeding your hardware limits. This falls into the category of "just get it working and then worry about optimizing it later if necessary". Who knows -- maybe the performance will be acceptable.

How would I go about this? I assume I'd need to enable different textures on each pass. How do I pass the UV coordinates (as different sets)?

Best,
Christoph
Re: [osg-users] Rotation animation
Hi Robert, thank you again for your reply. I will try to do it with a custom callback as you suggest. Let's see if I can manage to get the behaviour I want. Best regards.

-----Original Message-----
From: osg-users-boun...@lists.openscenegraph.org [mailto:osg-users-boun...@lists.openscenegraph.org] On Behalf Of Robert Osfield
Sent: Thursday, 25 October 2012 16:42
To: OpenSceneGraph Users
Subject: Re: [osg-users] Rotation animation

Hi Héctor,

On 25 October 2012 13:35, Héctor Martínez hector.marti...@sensetrix.com wrote:

> Then it seems that the only way to concatenate animations is by creating a custom UpdateCallback, right? Do you know any example about this that could help me to develop my own callback?

You can nest transform nodes and attach a separate callback to each one, but this may well not be what you are after. "Concatenate animations" is such an open-ended term that only you really know what you are after.

> I also have two more questions about animation:
> - What is the addNestedCallback function used for?

The OSG doesn't use a concept of pre and post traversal callbacks for each of the different traversals, but instead uses a scheme where multiple callbacks can be nested within each other. The advantage of this approach is that it makes it much easier to manage local state in a thread-safe way and to control traversal.

> - Is it possible to have different animations attached to the same node and play only one at a time?

The AnimationPathCallback doesn't support this, but there is nothing to stop you from implementing your own callback to do this, or assigning different paths at different times. Again I'd encourage you to roll your sleeves up and code yourself a custom update callback to do the animation exactly the way you need to.

Robert.
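For readers following this thread, the kind of custom update callback Robert suggests might look roughly like the sketch below. This is an untested illustration that assumes the OSG headers are available; the class and member names (`SequencedRotationCallback`, `Segment`) are hypothetical, not from any OSG API. It plays a list of rotation segments one after another, which is one way to "concatenate" animations on a MatrixTransform.

```cpp
#include <osg/MatrixTransform>
#include <osg/NodeCallback>
#include <osg/NodeVisitor>
#include <osg/Math>
#include <vector>

// Hypothetical sketch: drive a MatrixTransform through a sequence of
// rotations, one segment at a time, from a single update callback.
class SequencedRotationCallback : public osg::NodeCallback
{
public:
    struct Segment { osg::Vec3 axis; double degreesPerSecond; double duration; };

    void addSegment(const Segment& s) { _segments.push_back(s); }

    virtual void operator()(osg::Node* node, osg::NodeVisitor* nv)
    {
        osg::MatrixTransform* mt = dynamic_cast<osg::MatrixTransform*>(node);
        if (mt && !_segments.empty() && nv->getFrameStamp())
        {
            double t = nv->getFrameStamp()->getSimulationTime();
            double start = 0.0;
            for (size_t i = 0; i < _segments.size(); ++i)
            {
                const Segment& s = _segments[i];
                // apply the segment whose time window contains t
                // (the last segment keeps running past its duration)
                if (t < start + s.duration || i + 1 == _segments.size())
                {
                    double local = t - start;
                    mt->setMatrix(osg::Matrix::rotate(
                        osg::DegreesToRadians(s.degreesPerSecond * local),
                        s.axis));
                    break;
                }
                start += s.duration;
            }
        }
        traverse(node, nv);   // keep the traversal going
    }

private:
    std::vector<Segment> _segments;
};
```

Attaching it with `mt->setUpdateCallback(new SequencedRotationCallback)` and adding segments would then chain the rotations without nesting extra transform nodes.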
Re: [osg-users] Projective Multi-Texturing
On Fri, Oct 26, 2012 at 6:17 AM, Christoph Heindl christoph.hei...@gmail.com wrote:

> How would I go about this? I assume I'd need to enable different textures on each pass. How do I pass the UV coordinates (as different sets)?

Right. So for multipass in general, you have a root node; then under that node there is one Group node per pass; then all these Group nodes share a common child (the geometry to render). Each per-pass Group node can hold a unique StateSet that assigns the proper texture, etc.

Since you will also need to use the TexGen capabilities (discussed earlier in this thread) to generate the correct texture coordinates for each pass, you might use an osg::TexGenNode in place of the osg::Group. TexGen will generate the UV coordinates for you.

You might also look at the osgFX::Effect class. This is a framework for rendering a subgraph multiple times (multipass), using a separate StateSet for each pass (just what I described above), so it might be a good fit, or at least good reference material.

HTH.

Glenn Waldron / Pelican Mapping / @glennwaldron
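The structure Glenn describes can be sketched in a few lines. This is an untested illustration assuming OSG is available; the helper name `buildMultipassGraph` is made up for the example. Each pass gets its own Group with a per-pass StateSet, and all passes share a single geometry child.

```cpp
#include <osg/Group>
#include <osg/Texture2D>
#include <osg/StateSet>
#include <vector>

// Sketch: one Group per pass, all sharing the same geometry child,
// each Group carrying a StateSet that binds that pass's texture.
osg::ref_ptr<osg::Group> buildMultipassGraph(
    osg::Node* sharedGeometry,
    const std::vector< osg::ref_ptr<osg::Texture2D> >& passTextures)
{
    osg::ref_ptr<osg::Group> root = new osg::Group;
    for (size_t i = 0; i < passTextures.size(); ++i)
    {
        osg::ref_ptr<osg::Group> pass = new osg::Group;
        // unique per-pass state: bind this pass's texture on unit 0
        pass->getOrCreateStateSet()->setTextureAttributeAndModes(
            0, passTextures[i].get(), osg::StateAttribute::ON);
        pass->addChild(sharedGeometry);   // all passes share one child
        root->addChild(pass.get());
    }
    return root;
}
```

Swapping `osg::Group` for `osg::TexGenNode` per pass, as Glenn suggests, would additionally generate the projective texture coordinates for that pass.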
Re: [osg-users] Projective Multi-Texturing
On Fri, Oct 26, 2012 at 3:57 PM, Glenn Waldron gwald...@gmail.com wrote:

> Right. So for multipass in general, you have a root node; then under that node there is one Group node per pass; then all these Group nodes share a common child (the geometry to render). Each per-pass Group node can hold a unique StateSet that assigns the proper texture, etc.

Very clever. Thanks for sharing.

> Since you will also need to use the TexGen capabilities (discussed earlier in this thread) to generate the correct texture coordinates for each pass, you might use an osg::TexGenNode in place of the osg::Group. TexGen will generate the UV coordinates for you.

I don't think so, as I already have the UV coordinates. I calculate them myself in order to allow for camera lens distortions and such. I think the TexGen (EYE_LINEAR) mentioned previously assumes a plain pinhole camera model without regard to effects from lens distortion.

> You might also look at the osgFX::Effect class. This is a framework for rendering a subgraph multiple times (multipass), using a separate StateSet for each pass (just what I described above), so it might be a good fit, or at least good reference material.

Sounds good; just the name (Effect) puzzles me. From looking at the documentation it seems that a lot of effects such as outlines, glow, etc. are actually done with that.

Best,
Christoph
Re: [osg-users] Projective Multi-Texturing
Hi Christoph,

On 26 October 2012 17:04, Christoph Heindl christoph.hei...@gmail.com wrote:

> I don't think so, as I already have the UV coordinates. I calculate them myself in order to allow for camera lens distortions and such. I think the TexGen (EYE_LINEAR) mentioned previously assumes a plain pinhole camera model without regard to effects from lens distortion.

If you use your own shaders then you'll be able to compute the ST coordinates (what OpenGL calls the UV coords) and do something more sophisticated than standard OpenGL TexGen. For cases where the ST coordinates can't be computed in object or eye coordinates, you'll need to rely upon the UV coords that you've computed.

The OSG supports as many texture units as the underlying OpenGL implementation supports, and will scale up to 8 without problem on most hardware/drivers. Going this route would be the most straightforward.

> Sounds good; just the name (Effect) puzzles me. From looking at the documentation it seems that a lot of effects such as outlines, glow, etc. are actually done with that.

I wouldn't recommend the osgFX library; it'd just overcomplicate things for no gain. Also, while you could go the multi-pass route, I wouldn't recommend it, as the multi-texturing route should get you far enough along without needing a more complicated multi-pass solution. The OSG supports multi-pass cleanly, but it's always more complicated than using multi-texturing.

Robert.
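As background for readers: what EYE_LINEAR TexGen computes (and what a custom shader would replace with a distortion-aware version) is just a pair of plane dot products against the eye-space vertex. The sketch below is a plain C++ illustration of that math; the names `planeDot`, `TexGenPlanes`, and `eyeLinearST` are invented for the example and are not OSG or OpenGL API.

```cpp
#include <array>

// Illustration of EYE_LINEAR texgen: each texture coordinate is the dot
// product of the eye-space vertex position with a user-supplied plane.
// A custom vertex shader can start from this and add lens-distortion
// terms, which fixed-function TexGen cannot express.
using Vec4 = std::array<float, 4>;

float planeDot(const Vec4& plane, const Vec4& eyeVertex) {
    return plane[0]*eyeVertex[0] + plane[1]*eyeVertex[1]
         + plane[2]*eyeVertex[2] + plane[3]*eyeVertex[3];
}

// The S and T planes typically come from the projector's view and
// projection matrices.
struct TexGenPlanes { Vec4 s, t; };

std::array<float, 2> eyeLinearST(const TexGenPlanes& p, const Vec4& eyeVertex) {
    return { planeDot(p.s, eyeVertex), planeDot(p.t, eyeVertex) };
}
```

With identity-like planes `s = (1,0,0,0)` and `t = (0,1,0,0)`, the ST coordinates are simply the eye-space x and y of the vertex, which matches the OpenGL default object-linear planes.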
Re: [osg-users] Projective Multi-Texturing
On Fri, Oct 26, 2012 at 12:04 PM, Christoph Heindl christoph.hei...@gmail.com wrote:

> I don't think so, as I already have the UV coordinates. I calculate them myself in order to allow for camera lens distortions and such. I think the TexGen (EYE_LINEAR) mentioned previously assumes a plain pinhole camera model without regard to effects from lens distortion.

How are they stored? Per vertex? Or is it some kind of regular grid that does distortion correction? Is the computation something you could do in a vertex shader if the shader had access to the camera parameters?

> Sounds good; just the name (Effect) puzzles me. From looking at the documentation it seems that a lot of effects such as outlines, glow, etc. are actually done with that.

True; I just meant that it uses the same StateSet-per-pass idea.

Glenn
Re: [osg-users] Projective Multi-Texturing
On Fri, Oct 26, 2012 at 12:34 PM, Robert Osfield robert.osfi...@gmail.com wrote:

> If you use your own shaders then you'll be able to compute the ST coordinates (what OpenGL calls the UV coords) and do something more sophisticated than standard OpenGL TexGen. The OSG supports as many texture units as the underlying OpenGL implementation supports, and will scale up to 8 without problem on most hardware/drivers. Going this route would be the most straightforward.

Another idea is to use a TextureArray (GL_EXT_texture_array). This removes the limit on the number of textures, but adds the constraint that they all must be the same size and that you need GL 2.0.

Glenn
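Setting up the texture array Glenn mentions might look like the following. This is an untested sketch assuming an OSG build with `osg::Texture2DArray` support; the helper name `buildArray` is invented for the example, and the same-size constraint from the extension applies to every layer.

```cpp
#include <osg/Texture2DArray>
#include <osg/Image>
#include <vector>

// Sketch: pack N same-sized projector images into one Texture2DArray,
// so a shader can index them with a single sampler.
osg::ref_ptr<osg::Texture2DArray> buildArray(
    const std::vector< osg::ref_ptr<osg::Image> >& layers)
{
    osg::ref_ptr<osg::Texture2DArray> tex = new osg::Texture2DArray;
    if (layers.empty()) return tex;

    // GL_EXT_texture_array: all layers must share one size and format
    tex->setTextureSize(layers[0]->s(), layers[0]->t(),
                        static_cast<int>(layers.size()));
    for (unsigned int i = 0; i < layers.size(); ++i)
        tex->setImage(i, layers[i].get());
    return tex;
}
```

In GLSL the array is then sampled with a `sampler2DArray`, using the layer index as the third texture coordinate, which sidesteps the per-unit limit at the cost of the uniform-size constraint.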
Re: [osg-users] Problem with picking a manipulator in a HUD
Dear Robert,

A week has passed, and no, I have honestly not been looking at it further. I gave up in frustration. Life as a business administrator is a tiny bit software development, and tons of procurement, HR, finance and that kind of stuff.

The overall goal is to create a custom dragger in the form of a thumbwheel that will sit in the edges of the viewer and control the navigation of the camera. It will end up in osgGeo once it is done. osgGeo has btw moved from github to osggeo.googlecode.com, as git drove me crazy. If you know anyone who has done any thumbwheel-like things, let me know.

- Kristofer

On Thu, Oct 25, 2012 at 11:23 AM, Robert Osfield robert.osfi...@gmail.com wrote:

> Hi Kristofer,
>
> On 19 October 2012 13:17, Kristofer Tingdahl kristofer.tingd...@dgbes.com wrote:
>
>> I am trying to get a Dragger on my HUD display, but I am not able to manipulate it at all. If I add the identical dragger to the normal scene, it works as expected. Any insight into this is appreciated, as I have reached the end of trying to traverse the scene trying to gain understanding in this matter.
>
> I am just setting up Qt on my dev machine - I installed a new disk and OS but didn't pull in all external dependencies right away. Getting there, though... Once I have the Qt dev libs and headers installed I'll test out your example and have a look at what might be going on.
>
> Without actual testing I can't say what might be amiss. I know that osgManipulator was originally written for a 3D perspective window, so perhaps there are assumptions that don't map across. In principle I can't see a reason why using an orthographic view should prevent manipulators from working, so if they don't, this would be a bug/implementation limitation that should be addressed.
>
> As nearly a week has passed since you posted your query, have you made any progress with understanding the issue at your end?
>
> Robert.
--
Kristofer Tingdahl, Ph. D.
CEO dGB Earth Sciences
Re: [osg-users] custom CompositeDragger and AntiSquish
Hi,

Please disregard my first question about AntiSquish... I had a bug where a NodeCallback that mistakenly did not call traverse() was attached to a node above the Dragger. This meant the update traversal never reached the AntiSquish.

However, I am still open to suggestions about my second question: how to make the 1D dragger handles' length scale with the box length, but remain a constant width in screen coordinates.

Thank you!

Cheers,
Michael

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=50819#50819
Re: [osg-users] question about Optimizer and FLATTEN_STATIC_TRANSFORMS
lyceel wrote:

> Hi Michael,
> The structure of the scene itself can sometimes prevent the transform from being flattened away. What happens if you try FLATTEN_STATIC_TRANSFORMS_DUPLICATING_SHARED_SUBGRAPHS? (Sometimes you just need a bigger hammer :-) )

That worked, thanks.

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=50820#50820
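For reference, the bigger-hammer option is invoked the same way as any other Optimizer flag. A minimal sketch, assuming OSG is available (the wrapper function name `flatten` is illustrative):

```cpp
#include <osgUtil/Optimizer>
#include <osg/Node>

// Sketch: run the optimizer with the option that duplicates shared
// subgraphs so static transforms above them can still be flattened away.
void flatten(osg::Node* scene)
{
    osgUtil::Optimizer optimizer;
    optimizer.optimize(scene,
        osgUtil::Optimizer::FLATTEN_STATIC_TRANSFORMS_DUPLICATING_SHARED_SUBGRAPHS);
}
```

The trade-off is the one the name implies: shared subgraphs are duplicated, so memory usage can grow in exchange for removing the transforms.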
Re: [osg-users] Problem with picking a manipulator in a HUD
Sorry, I did not see your reply. The reason for the current construction is simply copy-paste from an example I found online. We are indeed working with the composite viewer, so that route may be the best.

So, to summarize, for the HUD I would:
1. Create a new camera.
2. Set the graphics context to the same osgQt::GraphicsWindowQt as the normal camera.
3. Create a new view for the HUD camera, and add it to the common composite viewer.

I'll look into this on Monday.

On Thu, Oct 25, 2012 at 12:36 PM, Robert Osfield robert.osfi...@gmail.com wrote:

> Hi Kristofer,
>
> I have just tested your cube.cpp and see the problem with being able to move the HUD layer. I haven't drilled down into the code yet, but I strongly suspect the issue is nesting a HUD Camera in the scene graph and how an intersection traversal will handle this case.
>
> My own inclination would be to place the HUD layer as a slave Camera in the viewer, or to use a CompositeViewer with one View for the 3D view and one View for the HUD layer. This way the intersections can happen independently for each View/Camera from the top and are less likely to conflict.
>
> Robert.

--
Kristofer Tingdahl, Ph. D.
CEO dGB Earth Sciences
Re: [osg-users] Projective Multi-Texturing
On 10/26/2012 01:19 PM, Glenn Waldron wrote:

> Another idea is to use a TextureArray (GL_EXT_texture_array). This removes the limit on the number of textures, but adds the constraint that they all must be the same size and that you need GL 2.0.

From what I can tell, the main issue isn't the number of textures, it's the number of varying parameters between the vertex and fragment shader. If there are too many textures being applied at once (each with its own set of texture coordinates), it may be possible to run out of these, depending on what else is being interpolated between shader stages.

On my machine (GeForce GTX 260):

GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS = 96
GL_MAX_VARYING_FLOATS = 60

So, 96 textures, but only 60 floats for varying parameters. This seems like a lot, but if there are 64 textures on the mesh, it's not even close to enough. Of course, newer cards than mine probably have more...

--J
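Jason's budget argument can be made concrete with a little arithmetic. The sketch below is purely illustrative (the function names are made up): each interpolated vec2 UV set costs two varying floats, so the varying limit, not the texture-unit limit, is what binds first.

```cpp
#include <algorithm>

// Each vec2 UV set consumes two varying floats between the vertex and
// fragment stages, so GL_MAX_VARYING_FLOATS caps the number of UV sets.
int maxUVSets(int maxVaryingFloats, int floatsPerUVSet = 2) {
    return maxVaryingFloats / floatsPerUVSet;
}

// The practical simultaneous-texture budget is the smaller of the two
// hardware limits (texture image units vs. interpolatable UV sets).
int maxProjectedTextures(int maxTextureUnits, int maxVaryingFloats) {
    return std::min(maxTextureUnits, maxUVSets(maxVaryingFloats));
}
```

With the GTX 260 numbers quoted above (96 units, 60 varying floats), this gives at most 30 simultaneously projected textures, which is why 64 textures on one mesh is "not even close".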
[osg-users] MiniMap/PIP
Hi,

So I am trying to create a mini-map/PIP. I have an existing program with a scene that runs inside a Qt widget. I have a class, NetworkViewer, which extends CompositeViewer. In NetworkViewer's constructor I call the following function. Notice that root is the scene, which is populated elsewhere.

Code:

void NetworkViewer::init()
{
    root = new osg::Group();
    viewer = new osgViewer::View();
    viewer->setSceneData( root );

    osg::Camera* camera = createCamera(0,0,100,100);
    viewer->setCamera( camera );
    viewer->addEventHandler( new NetworkGUIHandler( (GUI*)view ) );
    viewer->setCameraManipulator( new osgGA::TrackballManipulator );
    viewer->getCamera()->setClearColor( osg::Vec4( LIGHT_CLOUD_BLUE_F, 0.0f ) );
    addView( viewer );

    osgQt::GraphicsWindowQt* gw = dynamic_cast<osgQt::GraphicsWindowQt*>( camera->getGraphicsContext() );
    QWidget* widget = gw ? gw->getGLWidget() : NULL;
    QGridLayout* grid = new QGridLayout();
    grid->addWidget( widget );
    grid->setSpacing(1);
    grid->setMargin(1);
    setLayout( grid );

    initHUD();
}

The createCamera function is as follows:

Code:

osg::Camera* createCamera( int x, int y, int w, int h )
{
    osg::DisplaySettings* ds = osg::DisplaySettings::instance().get();
    osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
    traits->windowName = "";
    traits->windowDecoration = false;
    traits->x = x;
    traits->y = y;
    traits->width = w;
    traits->height = h;
    traits->doubleBuffer = true;
    traits->alpha = ds->getMinimumNumAlphaBits();
    traits->stencil = ds->getMinimumNumStencilBits();
    traits->sampleBuffers = ds->getMultiSamples();
    traits->samples = ds->getNumMultiSamples();

    osg::ref_ptr<osg::Camera> camera = new osg::Camera;
    camera->setGraphicsContext( new osgQt::GraphicsWindowQt(traits.get()) );
    camera->setViewport( new osg::Viewport(0, 0, traits->width, traits->height) );
    camera->setViewMatrix( osg::Matrix::translate(-10.0f,-10.0f,-30.0f) );
    camera->setProjectionMatrixAsPerspective(
        20.0f, static_cast<double>(traits->width)/static_cast<double>(traits->height), 1.0f, 1.0f );
    return camera.release();
}

I have been looking at several camera examples and searching for a solution for a while, to no avail. What I am really looking for is the background being my main camera, which takes up most of the screen and displays the scene graph, while my mini-map appears in the bottom right. It has the same scene as the main camera but is overlaid on top of it, and has its own set of controls for selection etc., since it will have different functionality.

I was thinking that perhaps by adding another camera as a slave I would be able to do this:

Code:

camera = createCamera(40,40,50,50);
viewer->addSlave(camera);

But this doesn't seem to work. If I disable the other camera I do see a clear area that this camera appears to have been supposed to render into (its viewport), but that doesn't help. I've played around with rendering order, thinking it could be that, to no avail.

Any ideas? What is the best way to do such a minimap? What am I doing wrong? Also, is there any way to make the rendering of the minimap circular instead of rectangular?

Thanks,
David

--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=50823#50823
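One common way to get a mini-map with its own manipulator in a CompositeViewer setup is a second View sharing the same scene and graphics context, rendered after the main camera. This is an untested sketch under those assumptions; the function name `addMiniMapView` and the viewport values are illustrative.

```cpp
#include <osgViewer/CompositeViewer>
#include <osgViewer/View>
#include <osg/Camera>
#include <osgGA/TrackballManipulator>

// Sketch: a second View that renders the same scene into a small
// viewport of the same window, with its own camera manipulator.
void addMiniMapView(osgViewer::CompositeViewer* viewer,
                    osg::Node* scene, osg::GraphicsContext* gc,
                    int x, int y, int w, int h)
{
    osg::ref_ptr<osgViewer::View> mini = new osgViewer::View;
    mini->setSceneData(scene);                             // same scene as the main view
    mini->getCamera()->setGraphicsContext(gc);             // same window
    mini->getCamera()->setViewport(new osg::Viewport(x, y, w, h));
    mini->getCamera()->setRenderOrder(osg::Camera::POST_RENDER);  // draw on top
    mini->getCamera()->setClearMask(GL_DEPTH_BUFFER_BIT);  // keep the main image behind it
    mini->setCameraManipulator(new osgGA::TrackballManipulator);  // independent controls
    viewer->addView(mini.get());
}
```

Because it is a separate View rather than a slave camera, picking and event handling stay independent, which matches the requirement of different selection behaviour in the mini-map. A circular mini-map would additionally need something like a stencil mask or an alpha-masked render-to-texture quad, which this sketch does not cover.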