Re: [osg-users] [osgPPU] FIX: osgPPU rendering not working in the first frame
Hello Art,

glPushAttrib / glPopAttrib would work as well, but it is not available in GLES, and I also use osgPPU with GLES. Note also that GLES has no 3D textures, only 2D textures and cube maps.

Regards, --Alex

-- Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=43939#43939

___ osg-users mailing list osg-users@lists.openscenegraph.org http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
[osg-users] [osgPPU] Fix picture freeze for certain camera perspectives.
Hi, in my application I had the problem that the picture was frozen for certain camera perspectives. This is caused by the near/far auto-computation, which also takes the geometry of the osgPPU::Units into account. In the faulty case, updateCalculatedNearFar(..) in CullVisitor::apply(Geode& node) returned false, because the osgPPU::Unit geode was considered to be completely behind the near plane. Please find attached a fix for osgPPU::Unit that prevents its geode from being taken into account for the near/far computation.

Regards, --Alex

-- Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=43140#43140 Attachments: http://forum.openscenegraph.org//files/unit_568.cpp
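The failure mode can be sketched without OSG. The following is a simplified stand-in, not the actual CullVisitor code: the rejection condition is reduced to "the drawable's eye-space depth range lies entirely behind the viewer", which is the false-return case described above.

```cpp
#include <algorithm>
#include <cassert>

// Eye-space convention: the viewer looks down -z, so visible depth values are
// negative; zNear/zFar are positive distances from the eye.
struct ZRange { double zMin; double zMax; };  // eye-space depth extent of a drawable

// Returns false when the drawable cannot contribute to the near/far planes,
// mimicking (in simplified form) the case where the osgPPU::Unit geode was
// rejected and the auto-computed planes never updated.
bool updateCalculatedNearFar(const ZRange& r, double& zNear, double& zFar)
{
    double dNear = -r.zMax;  // distance to the closest point
    double dFar  = -r.zMin;  // distance to the farthest point
    if (dFar <= 0.0) return false;     // entirely behind the viewer: no contribution
    dNear = std::max(dNear, 0.0);
    zNear = std::min(zNear, dNear);
    zFar  = std::max(zFar,  dFar);
    return true;
}
```

If every drawable in the scene is rejected this way, the auto-computed near/far planes are never updated, which matches the frozen picture described above.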
Re: [osg-users] [osgPPU] Near/Far clip planes: problem?
Hi, I had the same issue - please find my fix here: http://forum.openscenegraph.org/viewtopic.php?p=43140#43140 It prevents the osgPPU::Units from being taken into account for the near/far computation.

--Alex

-- Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=43141#43141
[osg-users] Intel Core i5, Ubuntu: Bug with element buffer object with both byte and ushort indices
Hello, I have a quite weird problem with Intel Core i5 integrated graphics under Ubuntu: when an element buffer object consists of a byte index array followed by a ushort index array, wrong triangles are rendered (see wrong_rendering.png, compared to right_rendering.png).

The attached osgt model file contains two GL_TRIANGLES draw calls that use the same buffer object for their indices:

DrawElementsUByte::draw size: 300 offset: 0
DrawElementsUShort::draw size: 861 offset: 300

I modified the code to issue only one of the draw calls - either the ubyte array or the ushort array - and in both cases the right triangles are rendered, whereas rendering fails when both draw calls are made. It looks like the driver interprets the ushort indices as byte indices, which would explain the fan-looking triangles that are drawn: the high byte (0) of a ushort index would then often be taken as vertex 0 for one of the three triangle vertices, so there would be many wrong triangles running from vertex 0 to other vertices - exactly as it looks in the attached screenshot of the wrong rendering.

The same model looks good on a system with Nvidia graphics, and it also works on the Intel GPU when the arrays reside in different buffer objects, or when element buffer objects are not used for the vertex indices at all. From the application side everything looks good inside osg, so I think this is a driver bug. Can anyone who also has integrated Intel graphics under Linux confirm this bug? (To verify, just start the osgviewer app with the attached model file.) Would this be something for the Mesa guys to look at?

Regards, --Alex

My spec:
Ubuntu 11.04
OpenGL vendor string: Tungsten Graphics, Inc
OpenGL renderer string: Mesa DRI Intel(R) Ironlake Desktop GEM 20100330 DEVELOPMENT x86/MMX/SSE2
OpenGL version string: 2.1 Mesa 7.10.2
OpenGL shading language version string: 1.20
VGA compatible controller: product: Core Processor Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@:00:02.0 version: 02 width: 64 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:42 memory:f000-f03f memory:e000-efff ioport:1170(size=8)
CPU: product: Intel(R) Core(TM) i5 CPU 660 @ 3.33GHz vendor: Intel Corp. physical id: 5 bus info: cpu@0 version: 6.5.5 serial: 0002-0655---- slot: XU1 PROCESSOR size: 1199MHz capacity: 1199MHz width: 64 bits clock: 133MHz capabilities: x86-64 boot fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt aes lahf_lm ida arat tpr_shadow vnmi flexpriority ept vpid cpufreq configuration: cores=2 enabledcores=2 id=1 threads=4

-- Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=42748#42748 Attachments: http://forum.openscenegraph.org//files/optosgt_211.gz http://forum.openscenegraph.org//files/right_rendering_125.png http://forum.openscenegraph.org//files/wrong_rendering_148.png
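The misinterpretation hypothesis is easy to model in plain C++ (a hypothetical illustration, not driver code): lay out ushort indices byte by byte as they would sit in the element buffer on a little-endian machine, and observe that every index below 256 contributes a 0 byte, which a driver walking the same memory one byte at a time turns into vertex 0.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Serialize ushort indices into raw bytes in little-endian order, the layout
// they have inside an element buffer object on x86. Re-reading this array as
// GL_UNSIGNED_BYTE indices - the suspected driver bug - yields the low and
// high bytes as separate "indices".
std::vector<uint8_t> misreadAsBytes(const std::vector<uint16_t>& indices)
{
    std::vector<uint8_t> raw;
    raw.reserve(indices.size() * 2);
    for (uint16_t v : indices) {
        raw.push_back(static_cast<uint8_t>(v & 0xFF));  // low byte
        raw.push_back(static_cast<uint8_t>(v >> 8));    // high byte: 0 for any index < 256
    }
    return raw;
}
```

Every second "index" produced this way is a small high byte (0 for indices below 256), which matches the observed fans of wrong triangles radiating from vertex 0.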
Re: [osg-users] Intel Core i5, Ubuntu: Bug with element buffer object with both byte and ushort indices
Hello Robert, we use OSG-3.0, and this time it is not the alignment problem that we had on the Intel EMGD before. The offset of the ushort array is 300, so it is already divisible by 4; I also tested with some other alignment values.

With my tests I verified that the content of the element buffer is valid for both draw calls, and if I issue only one of the draw calls, the right triangles are rendered - it only fails when both glDrawElements() are called. I hacked two lines into the DrawElementsUByte::draw() method for testing. This fails, rendering garbage triangles:

glDrawElements(GL_TRIANGLES, 300, GL_UNSIGNED_BYTE, 0);
glDrawElements(GL_TRIANGLES, 861, GL_UNSIGNED_SHORT, 300);

but this:

glDrawElements(GL_TRIANGLES, 300, GL_UNSIGNED_BYTE, 0);
//glDrawElements(GL_TRIANGLES, 861, GL_UNSIGNED_SHORT, 300);

or this:

//glDrawElements(GL_TRIANGLES, 300, GL_UNSIGNED_BYTE, 0);
glDrawElements(GL_TRIANGLES, 861, GL_UNSIGNED_SHORT, 300);

works well and draws the right triangles for either indices 0-299 or 300-1160. (Nothing else in the program is changed, so the buffer content is the same for all three test cases.) When each glDrawElements() is correct on its own, calling both in direct sequence would be expected to be correct too and show the union of the two draw calls, which it does not.

So, in the meantime I will kick out the Intel graphics and go for a new Nvidia card ...

--Alex

robertosfield wrote: Hi Alex, which version of the OSG are you using? Just prior to the OSG-3.0 release I checked in code that aligned arrays within a buffer object to 4-byte boundaries; this was done to address problems in Intel drivers. Robert.

On Wed, Sep 14, 2011 at 1:35 PM, Alexander Irion wrote: Hello, I have a quite weird problem with Intel Core i5 integrated graphics under Ubuntu: when an element buffer object consists of a byte index array followed by a ushort index array, wrong triangles are rendered.
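For reference, the 4-byte alignment Robert describes is just rounding each array's start offset within the shared buffer object up to the next multiple of four. This is a sketch of the arithmetic, not the actual OSG code:

```cpp
#include <cassert>
#include <cstddef>

// Round an offset up to the next multiple of 'alignment' (a power of two),
// as done when packing arrays into a shared (element) buffer object.
std::size_t alignOffset(std::size_t offset, std::size_t alignment = 4)
{
    return (offset + alignment - 1) & ~(alignment - 1);
}
```

In this report the ushort offset 300 is already a multiple of 4, which is why alignment alone does not explain the failure.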
[osg-users] [osgPPU] osgPPU with GLES 2.0 ?
Hello, did anyone try to use osgPPU with GLES 2.0? Would it be difficult to do the adaptation for ES? Which parts of the library might be affected?

Regards, --Alex

-- Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=42524#42524
Re: [osg-users] [vpb] Creating .osga File
Hi! I'm also using osg_archive to create an archive after the computation with osgdem. You have to process all the .ive files and rewrite the names of the referenced sub-tiles and images: they must begin with the archive's name, e.g. myarchive.osga/tileXYZ.ive or myarchive.osga/tileXYZ.png. You can write a little program that reads all the input files, replaces the names and writes them out again.

I recently also had much trouble using .osga in conjunction with .osgb.gz and .osgb.dds.gz. I submitted my fixes here, but there is no response yet: http://forum.openscenegraph.org/viewtopic.php?p=33719#33719

--Alex

-- Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=33777#33777
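The renaming step above can be sketched as a small helper that prefixes every referenced filename with the archive name. The names myarchive.osga and tileXYZ.ive are just the examples from the post; this is an illustration, not part of VPB or OSG:

```cpp
#include <cassert>
#include <string>

// Prefix a referenced tile/image filename with the archive name so that
// references resolve inside the .osga archive, e.g.
// "tileXYZ.ive" -> "myarchive.osga/tileXYZ.ive".
// Names that already carry the prefix are returned unchanged.
std::string prefixWithArchive(const std::string& archive, const std::string& file)
{
    const std::string prefix = archive + "/";
    if (file.compare(0, prefix.size(), prefix) == 0) return file;  // already prefixed
    return prefix + file;
}
```

A real tool would apply this to every filename reference found while reading each .ive file, then write the file out again, as the post suggests.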
[osg-users] [osgPlugins] osgt can't read double matrices
Hi, when I try to read a .osgt file that contains a Matrixd, I get the following warning printed on the console:

AsciiInputIterator::readProperty(): Unmatched property Matrixd, expecting Matrixf

I had a look into the code and saw that _useFloatMatrix in InputStream is always TRUE when reading a .osgt file; it can only be set to FALSE when a .osgb file is read. In MatrixSerializer::readMatrixImplementation, the double matrix from the file is then read as a float matrix and converted back to double, of course losing the double precision. Wouldn't it be better if the matrix type in the file were checked and the matrix then converted into the type that is requested?

Thank you! Cheers, Alex

-- Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=32085#32085
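The precision loss can be demonstrated without OSG types: round-tripping a double through float, as effectively happens to each matrix element when _useFloatMatrix is true, discards everything beyond float's roughly 7 significant digits.

```cpp
#include <cassert>
#include <cmath>

// Round-trip a double through float, as the .osgt reader effectively does
// for each matrix element when it reads a Matrixd as a float matrix.
double roundTripThroughFloat(double v)
{
    float f = static_cast<float>(v);   // precision is lost here
    return static_cast<double>(f);
}
```

For a translation component like 123456.789012345, the float round trip changes the value by roughly 1e-4, which is exactly the kind of drift a double-precision matrix is meant to avoid.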
[osg-users] [vpb] Black chroma keying - bug or feature?
Hi, I use osgdem with a dummy tile of height zero and the --whole-globe flag set to get the whole earth generated. If I add a texture with parameter -t, the black parts of the texture become translucent, and in the resulting texture the color (light grey) of my dummy tile is used instead; see the result in the attached picture srtm_L3_X3_Y3_incorrect.jpg. I found the reason in the following if statement at line 796 of SourceData.cpp:

...
else if (sourceColumnPtr[0]!=0 || sourceColumnPtr[1]!=0 || sourceColumnPtr[2]!=0)
{
    destinationColumnPtr[0] = sourceColumnPtr[0];
    destinationColumnPtr[1] = sourceColumnPtr[1];
    destinationColumnPtr[2] = sourceColumnPtr[2];
    if (destination_hasAlpha) destinationColumnPtr[3] = 255;
}
...

The if statement causes the color value to be taken only if it is not black (r=0, g=0, b=0). Commenting out the condition yields the correct result, shown in picture srtm_L3_X3_Y3_correct.jpg. Now I wonder what the purpose of this if statement was?

Regards, Alex

-- Read this topic online here: http://forum.openscenegraph.org/viewtopic.php?p=30688#30688 Attachments: http://forum.openscenegraph.org//files/srtm_l3_x3_y3_correct_163.jpg http://forum.openscenegraph.org//files/srtm_l3_x3_y3_incorrect_598.jpg
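A simplified stand-in for that copy loop (not the actual SourceData.cpp code) shows the effect on a single texel: with the condition active, pure-black source texels are skipped, so the destination keeps the dummy-tile color and never gets an opaque alpha.

```cpp
#include <cassert>
#include <cstdint>

struct RGBA { uint8_t r, g, b, a; };

// Mimics the quoted copy step: when skipBlack is true, the source color is
// only taken if it is not pure black (r=0, g=0, b=0); otherwise the
// destination texel (the dummy-tile color, alpha 0) is left untouched.
RGBA copyTexel(const RGBA& src, RGBA dst, bool skipBlack)
{
    if (!skipBlack || src.r != 0 || src.g != 0 || src.b != 0) {
        dst.r = src.r;
        dst.g = src.g;
        dst.b = src.b;
        dst.a = 255;  // opaque once a color is taken
    }
    return dst;
}
```

With skipBlack set to false (the commented-out condition), black texels are copied opaquely, matching the corrected rendering; the original condition looks like intentional black chroma keying.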