Thanks Farshid, that explains it all.
Farshid Lashkari wrote:
Hi Fred,
OSG does compute the correct near/far values for pre-render cameras, assuming
the computation is enabled for them.
Hi,
I have two different programs, the first one of which uses GL_PATCHES
primitives and has program->setPatchParameter(GL_PATCH_VERTICES) set to 1. The
second program does not specify a value as it does not use GL_PATCHES.
I see PatchParameter called at link time with a value of 1 for the first
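To make the setup concrete, here is a minimal sketch of keeping the patch size tied to the state that uses it, assuming the osg::PatchParameter state attribute (the geode name is hypothetical):
Code:
// Hedged sketch: attach the patch size to the stateset that draws
// GL_PATCHES, so other programs are unaffected.
#include <osg/Geode>
#include <osg/PatchParameter>

osg::ref_ptr<osg::Geode> patchGeode = new osg::Geode; // hypothetical GL_PATCHES node
patchGeode->getOrCreateStateSet()->setAttribute(
    new osg::PatchParameter(1)); // glPatchParameteri(GL_PATCH_VERTICES, 1)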
Hi Sebastian and aperuggi,
I too am tempted to revive this old thread. My observation regarding the Z
near/far values for a PRE_RENDER camera is that they are always inherited from
the master camera. Like you, I have the following scene graph:
MainCamera
|
+--PreRenderCamera - RELATIVE_RF
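A minimal sketch of what I would expect to need in order to stop that inheritance (hedged; preRenderCamera is the RELATIVE_RF camera in the graph above):
Code:
// Give the pre-render camera its own near/far computation instead of
// the values inherited from the master camera.
preRenderCamera->setComputeNearFarMode(
    osg::CullSettings::COMPUTE_NEAR_FAR_USING_BOUNDING_VOLUMES);
preRenderCamera->setInheritanceMask(
    preRenderCamera->getInheritanceMask() &
    ~osg::CullSettings::COMPUTE_NEAR_FAR_MODE);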
Robert,
Sorry to further fill this thread with a question, but couldn't it be that
the OSG indeed correctly calculates the near/far values for my PRE_RENDER
camera, and that I simply do not know (yet) a proper way to correctly
retrieve the projection matrix? I see the CullVisitor
Hi,
I can render my scene graph fine when declared under the master camera in my
viewer.
What I am focused on is the Z far and near values; I must ensure they're good.
I can inspect the view and projection matrix, render the viewing frustum (as
lines for instance), and observe that the Z far
control, in particular the
near/far computation.
Robert.
On 31 May 2013 21:54, Fred Smith wrote:
Hi,
I can render my scene graph fine when declared under the master camera in
my viewer.
What I am focused on is the Z far and near values; I must ensure they're
good - I can
There might be one thing wrong.
When creating the slave camera, I set it up as a child of my master camera.
Should I use osg::View::addSlave(Camera) instead, and leave my master camera's
scene graph empty?
Sorry if this question has been answered before; I am all mixed up with regard
to the
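For comparison, a minimal sketch of the addSlave route as I understand it (hedged; viewer and rttScene are assumed to already exist):
Code:
// Slave camera with its own scene graph: pass useMastersSceneData = false
// so the slave does not render the master camera's scene.
osg::ref_ptr<osg::Camera> slaveCamera = new osg::Camera;
slaveCamera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
slaveCamera->setRenderOrder(osg::Camera::PRE_RENDER);
viewer.addSlave(slaveCamera.get(), false);
slaveCamera->addChild(rttScene.get()); // hypothetical RTT scene root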
Hi,
Anybody working on basic compute shader integration? Just the new shader type
and a way to kick off the job using DispatchCompute would be awesome.
Might look into this next month if nobody is up for it.
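To make the ask concrete, a minimal sketch of the kind of API I have in mind (hedged; osg::Shader::COMPUTE and the DispatchCompute drawable are the names later OSG releases ended up using, and the shader source, group counts, and scene root are placeholders):
Code:
#include <osg/DispatchCompute> // drawable that issues glDispatchCompute
#include <osg/Program>

static const char* computeSrc =
    "#version 430\n"
    "layout (local_size_x = 16, local_size_y = 16) in;\n"
    "void main() { /* job goes here */ }\n";

osg::ref_ptr<osg::Program> program = new osg::Program;
program->addShader(new osg::Shader(osg::Shader::COMPUTE, computeSrc));

osg::ref_ptr<osg::DispatchCompute> dispatch =
    new osg::DispatchCompute(8, 8, 1); // work group counts
dispatch->getOrCreateStateSet()->setAttribute(program.get());
root->addChild(dispatch.get()); // hypothetical scene root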
Cheers,
Fred
Hi,
I have problems accessing SVN repositories today using a 1.6.7 client (maybe
I should upgrade!). On a side note, I can't access www.openscenegraph.org either.
Cheers,
Fred
Thanks for the heads up. I have set up the git repository.
Cheers,
Fred
Thanks, I will just do this.
I'm a bit fuzzy as to what is or isn't part of the state in OSG (osg::StateSet
and things deriving from osg::StateAttribute, which buffer objects aren't). I
guess the rule is that everything that IS part of the state in OpenGL is also
part of the state in OSG.
Fred
Hi,
I have some geometry that is making use of a VBO. The VBO contains an extremely
large number of vertices. Similarly, I have an element buffer (EBO) that
contains a large number of indices. I need to render some objects that
reference selected vertices of the VBO and selected indices of the
Hi David,
Thanks for the quick reply!
ledocc wrote:
I already had to do this kind of optimization; I used a NodeVisitor to
traverse the graph and assign the VBO and EBO to geometries.
Sounds like you are reusing the same VBO/EBO for all geometries. Did you check
that OSG is indeed not
Hi,
No problem building for me here on Windows 7, VS2010 SP1, 32-bit build, default
CMake options + BUILD_OSG_EXAMPLES.
Cheers,
Fred
robertosfield wrote:
Hi Fred,
On Sat, May 28, 2011 at 1:18 PM, Fred Smith wrote:
On Windows XP or 7, AMD or nvidia hardware, they *are* causing a huge leak.
Ever heard of driver problems??? Go try another OS, go try another type
of hardware, go try another driver, go try a memory
robertosfield wrote:
...both of which are attached,
I run both tests:
fbotest --testRTT
And
fbotest
And both run without problems and without memory growth; they both
seem fine. Despite being pretty dire ways to drive the OSG, both seem to
not cause any problems.
But they do.
Hi,
Will this issue likely be addressed in the near future? I guess only somebody
relatively experienced with the OSG code base can dig into this.
I can test the code quite extensively as I have routines that process a lot of
data. Right now I'm stuck with release 2.9.10. Not absolutely sure
robertosfield wrote:
Hi Fred,
Please try the latest updates to svn/trunk, it may or may not address
the issues you have seen. If it doesn't, please put together a small
example that reproduces the problem.
Robert.
robertosfield wrote:
Hi Fred,
On Fri, May 27, 2011 at 1:12 PM, Fred
Attached is a cleaned up, less messy version of the repro.
testRTTCamera() shows off the leak with offscreen rendering using a slave camera
testLeak() shows off the GraphicsContext leak.
Hi J.P.,
I'll try running osgmemorytest, but I don't have the problem with OSG 2.9.10.
Cheers,
Fred
dglenn wrote:
Jason Daly wrote:
On 05/18/2011 05:14 AM, Fred Smith wrote:
Hi,
The only problem that I have personally heard about is the slow transfer
speed to/from GPU memory on Fermi cards. DMA transfers are said to be
slower than on GTX 2xx hardware. I haven't seen
Hi,
Here is a repro of the slave camera problem (the original problem, not the
bounding box stuff).
Increase the number of testRTTCamera() iterations to see the problem better.
It seems there is something wrong with respect to how GL objects are released,
as the program is stuck after a
Hi,
The only problem that I have personally heard about is the slow transfer speed
to/from GPU memory on Fermi cards. DMA transfers are said to be slower than on
GTX 2xx hardware. I haven't seen anything related to computing.
Cheers,
Fred
robertosfield wrote:
I didn't even realize there was a glVertexAttribIPointer... so yes
this does sound like it will be the issue. Either
osg::Geometry::drawImplementation or osg::State will need to be
adapted to detect the use of the integer array and use the
glVertexAttribIPointer. Feel
Hi Robert,
I have a serious, massive memory leak in the trunk (updated this morning at
around 10am UK time). I haven't tried with previous releases yet.
The following code leaks memory in a very important manner. Put this code
within a while (true) { testLeak(); } block and you should see
Hi Robert,
I am a bit puzzled, as the second problem I am having at the moment yields a
very different behavior when isolated in a small repro.
This time I get an assertion.
Attached is a modified osggeometry.cpp file that triggers the assertion I'm
talking about. The assertion is raised
Same issues with OSG 2.9.10.
Any idea about what's going on?
Cheers,
Fred
Hi,
Thanks Stephan, you actually replied to my original message. I have since
edited it and suggested concentrating on the leak issue. The other issue I
have is still under investigation and yes, the code I had originally posted
(too hastily) to illustrate the second issue was incorrect ;)
Hi,
The second issue I had was that, by design, OSG doesn't assume the bounding box
of a drawable should be recalculated when setting a computeBoundingBoxCallback
on the object.
In other words I was expecting Drawable::_boundingBoxComputed to be set back to
false when setting the callback.
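A minimal sketch of the workaround (hedged; MyBoundCallback is a hypothetical callback):
Code:
// Installing the callback does not invalidate the cached bound, so
// dirty it explicitly to force a recalculation on the next pass.
drawable->setComputeBoundingBoxCallback(new MyBoundCallback);
drawable->dirtyBound();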
It looks like a problem with OSG to me. OSG should be using
glVertexAttribIPointer instead of glVertexAttribPointer when dealing with
integer attributes.
See:
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=296704#Post296704
Not sure this is the actual problem but it
Hi,
I can successfully bind a Vec3 attribute to my vertices. But somehow, calling
setVertexAttribArray with an IntArray doesn't work for me.
I have attached a modified osggeometryshaders.cpp file to illustrate my issue.
Summary of changes:
Code:
// all shader versions changed to #version 150
Paul Martz wrote:
On 5/6/2011 4:05 AM, Fred Smith wrote:
You usually want to create GL batches that are as large as possible. 10
draw calls of 100 elements will be slower than 1 draw call of 1000 elements.
Something else I noticed a little while ago was that the stateset
It means that there seems to be a cost to doing state changes with OSG. GL
state changes do not necessarily incur any cost (this is implementation
specific anyway), whereas OSG state changes seem to always incur one, as even
going through empty statesets incurs a cost.
Is that clear
Hi Robert,
The stateset performance issue, if ever encountered, can easily be worked
around with a draw callback using applyAttribute/applyTextureAttribute prior to
rendering every drawable.
This is a very slim point where there might be room for improvement, and what
you said perfectly makes
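To illustrate, a minimal sketch of such a draw callback (hedged; the texture member stands in for whatever attribute you want to apply):
Code:
#include <osg/Drawable>
#include <osg/State>
#include <osg/Texture2D>

// Apply an attribute directly at draw time, bypassing stateset traversal.
struct ApplyStateCallback : public osg::Drawable::DrawCallback
{
    osg::ref_ptr<osg::Texture2D> _texture; // hypothetical attribute

    virtual void drawImplementation(osg::RenderInfo& renderInfo,
                                    const osg::Drawable* drawable) const
    {
        renderInfo.getState()->applyTextureAttribute(0, _texture.get());
        drawable->drawImplementation(renderInfo);
    }
};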
You usually want to create GL batches that are as large as possible. 10 draw
calls of 100 elements will be slower than 1 draw call of 1000 elements.
Something else I noticed a little while ago is that the stateset
processing engine of OSG might be slow. One thing I have been surprised with
robertosfield wrote:
Please try the latest updates to svn/trunk, it may or may not address
the issues you have seen. If it doesn't, please put together a small
example that reproduces the problem.
I tried the trunk updated about an hour ago and I am still having problems. I
see the memory
Hi,
I have an application using a single, unique viewer doing occasional RTT by
means of an ABSOLUTE_RF slave camera with a distinct scene graph. I had chosen
a while ago to do my RTT this way and not use a separate context.
I have a tool that pregenerates lots of textures, each texture
Hi,
I must have been spending too long in front of my computer lately...
The following code works fine:
Code:
// GLSL
out vec4 out_color;
void main(void)
{
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
out_color = gl_Color;
}
// Trivial fragment shader code omitted
hybr,
I am not running a debug build.
Just create a scene graph with 100 geometries, and attach a texture attribute
to the stateset of each geometry. Now compare the framerate with the same scene
graph, no stateset created, but a draw callback set up inside which you call
Hi,
I'm working with both ATI/AMD and nVidia hardware, almost on a daily basis.
I've had a lot of bugs with AMD drivers with regard to GLSL shader
development, mainly in the compiler. I've been working with them for six months
to help them fix bugs, which they have done, but I still have a
Hi,
I'm trying to share a VBO and an EBO between multiple primitivesets of
different geodes. My structure is the following (I always have only one
primitiveset per geode, hence my question):
+ Geode
--+ PrimitiveSet
+ Geode
--+ PrimitiveSet
+ Geode
--+ PrimitiveSet
and so on
I've tried to do
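For clarity, a minimal sketch of the sharing I am attempting, patterned on what the osgparametric sample does if I read it right (hedged; the vertex and index data are placeholders):
Code:
#include <osg/BufferObject>
#include <osg/Geometry>

// One VBO/EBO pair, reused by every geometry.
osg::ref_ptr<osg::VertexBufferObject> vbo = new osg::VertexBufferObject;
osg::ref_ptr<osg::ElementBufferObject> ebo = new osg::ElementBufferObject;

// For each geode's geometry:
osg::ref_ptr<osg::Geometry> geometry = new osg::Geometry;
geometry->setUseVertexBufferObjects(true);

osg::ref_ptr<osg::Vec3Array> vertices = new osg::Vec3Array; // fill with data
vertices->setVertexBufferObject(vbo.get());
geometry->setVertexArray(vertices.get());

osg::ref_ptr<osg::DrawElementsUShort> de =
    new osg::DrawElementsUShort(GL_TRIANGLES); // fill with indices
de->setElementBufferObject(ebo.get());
geometry->addPrimitiveSet(de.get());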
Hi,
I have a very large scene graph where I have many geodes.
Each of these geodes has a stateset with a GPU program defined on it, containing
a vertex and a fragment shader.
The GPU program instance is unique for my whole application - I am not
recreating the program over and over again for
Hi JP,
J.P. Delport wrote:
Hi Fred,
maybe have a look at this:
http://www.bricoworks.com/articles/stateset/stateset.html
I remember something about state equality only being tested using
pointers and not internal data. So, maybe you'll have to explicitly
share state (that
Things seem to be working today, although I need to double check everything to
make sure I'm not dreaming.
glUseProgram errors only happen on ATI hardware with Catalyst 11.2 drivers - I
haven't tried 11.3 yet. My nvidia setup seems fine.
As for the GL context version, source code modification
hybr wrote:
Hi Fred
You can check whether the issue is somehow related to bounds calculation by using
setCullingActive(false) on your node or geometry.
Cheers, Sergey
The issue is not related to bounds calculation. My bounding box is calculated
correctly. Adding dummy geometries around my
Juan Hernando wrote:
I'll be very busy until next Friday so I can't answer you properly. A
quick answer is that I just replicated what I saw inside the OSG code
for other textures. If you're sure that the binding is not needed,
remove it. I'll try to come back to this issue later and
I can get my square if vertices reside in attribute slot number 0, not 6.
But in this case, my shader needs to be:
Code:
// vertex shader
#version 150
uniform mat4 osg_ModelViewProjectionMatrix;
in vec3 in_vertex;
void main(void)
{
gl_Position = osg_ModelViewProjectionMatrix * gl_Vertex;
hybr wrote:
As I understand it, you set your vertex array in OSG to vertex attrib slot 0,
and set up the vertex attrib binding for the shader program to slot 0 for
in_vertex. What result do you get when you use in_vertex in the shader instead
of gl_Vertex, exactly? Did you convert your in_vertex to vec4 with w=1.0 in
Things are extremely weird. Using in_vertex does actually work; I had
misspecified the binding location. So the following line works:
// both gl_Vertex and vec4(in_vertex,1.0) work here
gl_Position = osg_ModelViewProjectionMatrix * vec4(in_vertex,1.0);
Actually this works even if I do NOT call
There must be something I'm missing.
Below is a repro of my problem. This is a modification of the
osgvertexattributes sample, from the trunk.
Copy/paste the following code after createSimpleTestModel:
Code:
class MyCallback : public osg::Drawable::ComputeBoundingBoxCallback
{
public:
Peter Hrenka wrote:
Do you set the initial Bounding Box?
I think with generic Vertex Attributes
OpenSceneGraph has no chance to calculate
the Bounding Box by itself.
I calculate the bounding box myself and I set up a bounding box calculation
callback for every geometry that I create through
Things work if I call setVertexArray. I mean that I have to call
setVertexArray, in addition to setVertexAttribArray, even though my shader is
solely based on vertex attribute data and doesn't use GL2-style dedicated
vertex coordinates data.
Code:
// This line should not be needed but seems
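To be concrete, a minimal sketch of my setup (hedged; geometry and program are my existing objects, and the slot and name come from my own code):
Code:
// Vertex data exposed only as a generic attribute, bound to slot 0.
osg::ref_ptr<osg::Vec3Array> verts = new osg::Vec3Array; // filled with data
geometry->setVertexAttribArray(0, verts.get());
geometry->setVertexAttribBinding(0, osg::Geometry::BIND_PER_VERTEX);
program->addBindAttribLocation("in_vertex", 0);
// In theory the next line is redundant; in practice I still need it:
geometry->setVertexArray(verts.get());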
Peter Hrenka wrote:
On the GL trace side I don't see anything wrong when I don't call
setVertexArray:
glVertexAttribPointer(...);
glDrawElements(...);
When I also specify a vertex array the glVertexPointer call obviously seems
superfluous:
glVertexPointer(...); //
I don't believe my problem comes from the bounding box, but out of curiosity:
the osgvertexattributes sample does not seem to specify a compute bounding box
callback, so how come the camera is set up correctly?
This is funny, I knew that I'd have to tweak my application but somehow
expected OSG samples to work right out of the box, unmodified. Silly...
Thanks for your reply.
Even though I saw the osgvertexattributes sample, I'm having problems switching
to vertex attributes; that's probably worth
Hi,
I can't get vertex attributes to work in my application.
Right now my aim is to display geometry with just any color, meaning I can have
very simple shaders, to start with.
I call setUseModelViewAndProjectionUniforms(true) on application startup to
make sure I can get the MVP matrix as a
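For reference, a minimal sketch of that startup call (hedged; I go through the camera's graphics context State, after viewer.realize()):
Code:
// Ask OSG to supply osg_ModelViewProjectionMatrix and friends as uniforms.
osg::State* state = viewer.getCamera()->getGraphicsContext()->getState();
state->setUseModelViewAndProjectionUniforms(true);
state->setUseVertexAttributeAliasing(true); // alias gl_Vertex etc. to attributes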
Hi,
I'm trying to have OSG run inside a GL3 core profile context. I have downloaded
gl3.h from opengl.org and followed the instructions on this thread to properly
configure CMake:
http://forum.openscenegraph.org/viewtopic.php?t=4224
That is:
OSG_GLU_AVAILABLE OFF
OSG_GL1_AVAILABLE OFF
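Besides the CMake options, the context itself must request GL3. A minimal sketch using GraphicsContext traits (hedged; the window size is a placeholder):
Code:
// Request a GL3 context through the traits.
osg::ref_ptr<osg::GraphicsContext::Traits> traits =
    new osg::GraphicsContext::Traits;
traits->width = 1280;
traits->height = 720;
traits->windowDecoration = true;
traits->doubleBuffer = true;
traits->glContextVersion = "3.1"; // requested GL version
osg::ref_ptr<osg::GraphicsContext> gc =
    osg::GraphicsContext::createGraphicsContext(traits.get());
viewer.getCamera()->setGraphicsContext(gc.get());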
robertosfield wrote:
Hi Fred,
This is likely to be down to the lack of an OSG_EXPORT in the
declaration of View::Slave. Slave used to be implemented entirely in the
header, but now has methods implemented in the .cpp.
I've just checked in the addition of an OSG_EXPORT, could you try this
Hi Chris,
Working with VC++ 2010 with no SP1 installed (yet) - I am not seeing any
unusual behavior.
Cheers,
Fred
Hi,
Fresh SVN checkout, VC++ 2008 debug build - I get the following error when
building my own application:
View.obj : error LNK2001: unresolved external symbol public: virtual void
__thiscall osg::View::Slave::updateSlaveImplementation(class osg::View &)
The problem seems to disappear when my application stops calling the following
method:
Code:
osg::View::getSlave
Any recent change in the source code that would cause this unexpected link
problem?
Cheers,
Fred
Nobody here?
I thought there was a way to remove the makeCurrent(context) /
makeCurrent(null) calls in between the rendering of frames.
I believe there was a message from Robert about this feature.
Isn't that the case?
Sweet. This was the very method I had in mind but couldn't remember the name
of.
Thank you !
Hi,
If I recall correctly, there is a setting somewhere to set up an 'optimistic'
rendering loop whereby OSG doesn't wrap rendering with makeCurrent(context id)
/ makeCurrent(null), always leaving the unique context active. Can't remember
where this setting is. Could you refresh my memory?
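I believe the setting I am thinking of is ViewerBase::setReleaseContextAtEndOfFrameHint; a minimal sketch (hedged):
Code:
// Keep the single context current between frames instead of
// releasing it after every frame() call.
osgViewer::Viewer viewer;
viewer.setReleaseContextAtEndOfFrameHint(false);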
Hi Juan,
I finally managed to make it work. I can't pinpoint exactly what was wrong; I
got tangled up in several issues at the same time (signed/unsigned sampler1D vs
buffer, the ATI (bogus) compiler vs nvidia, a texture originally too large,
incorrect pixel formats...). I just ended up checking
Hi,
My simple fragment shader with textureSize() seems to work fine when dealing
with a usampler1D uniform (the problem I had at one point was that I was giving
it a texture that was too big for the driver, hence the size actually ended up
being 1, and my test was always failing).
Hi Juan,
Thanks for your reply. I'm still having problems trying to make use of
texelFetchBuffer with your code.
What texture pixel format are you using? I'm using an integer texture format.
Not sure if OSG's allocateImage method behaves ok in this case.
I am creating the texture the following
fred_em wrote:
Code:
[...]
tb->setInternalFormat(GL_RGBA8UI); // 4 bytes per pixel, R-G-B-A format as
per EXT_texture_integer formats specification
osg::Image *image = new osg::Image();
image->allocateImage(128*128, 1, 1, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, 1); //
note: width=(128*128),
Hi,
When I compile your code I get the following errors:
Code:
error C2065: 'GL_TEXTURE_BUFFER_EXT' : undeclared identifier
error C2039: 'Extensions' : is not a member of 'osg::BufferObject'
error C2039: 'getExtensions' : is not a member of 'osg::BufferObject'
[...]
I don't see any
Hi,
OK, I temporarily replaced Extensions with GLEW, things moved forward, but I'm
stuck with the following compilation error now:
Code:
} else if (_image.valid() && _image->data()) {
/* Temporary copy */
osg::ref_ptr<osg::Image> image = _image;
/* Creating the texture
Hi Juan,
I managed to make some progress.
- I replaced osg::BufferObject::Extensions with GLEW
- I changed the other offending expression:
generateTextureObject(contextID, GL_TEXTURE_BUFFER_EXT);
to
generateTextureObject(this, contextID, GL_TEXTURE_BUFFER_EXT);
Things compile fine.
I
I figured out why texelFetch wasn't being compiled successfully.
1) I forgot the last argument. 2) I last tried on AMD/ATI hardware, and the GLSL
compiler on this platform doesn't seem to actually recognize 'texelFetch'...
Now moving on with my tests.
Hi everyone,
Are Texture Buffer Objects supported in OSG? From what I can see, I have to
create and manage them myself.
Cheers,
Fred
own TextureBuffer::TextureBufferObject class. Nevertheless it worked for me.
I can contribute the code for others to use and review for future inclusion.
Regards,
Juan
On 17/01/11 14:55, Fred Smith wrote:
Hi everyone,
Are Texture Buffer Objects supported in OSG? From what I can
Hi J.P.,
Let's say you have 1000 textures. You can't create 1000 PBOs as this would
consume far too much memory.
What you do is play with 2 PBOs. Have a look at the diagram here, in the figure
called 'Streaming texture uploads with 2 PBOs':
http://www.songho.ca/opengl/gl_pbo.html#unpack
You
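A minimal sketch of the two-PBO ping-pong from that page (hedged; raw GL rather than OSG, with the texture size as a placeholder and pbo[] created earlier with glGenBuffers):
Code:
// Upload from one PBO while filling the other, then swap.
const int width = 1024, height = 1024;       // placeholder texture size
const GLsizeiptr size = width * height * 4;  // bytes per texture (RGBA8)
static GLuint pbo[2];                        // created with glGenBuffers(2, pbo)
static int index = 0;
const int next = 1 - index;

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[index]);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_BGRA, GL_UNSIGNED_BYTE, 0); // DMA from the bound PBO

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[next]);
glBufferData(GL_PIXEL_UNPACK_BUFFER, size, 0, GL_STREAM_DRAW); // orphan
void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
// ... fill dst with the next texture's pixels ...
glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
index = next;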
Hi Robert,
Let's leave GPU->CPU transfers aside. I don't mind if they are slow.
You usually have to use 2 PBOs.
If I use a single PBO to upload texture data to the GPU, performance will be
very low.
It seems to me I have two ways to do CPU->GPU transfers efficiently.
1) Use 2 different
fred_em wrote:
Hi,
I am confused about how to use PBOs in OSG. I browsed the forum quite a bit
but still can't make my code work.
1) GPU->CPU pixel transfers
As I understand it, an osg::Image with an osg::PixelBufferObject cannot be
used (yet) to read FBO contents. I have actually
Hi,
drawImplementation will be called once as display lists are used by default.
Call Drawable::setUseDisplayList(false) to indicate you don't want a display
list to be created, which will lead OSG to call drawImplementation every time
your drawable needs to be rendered.
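A minimal sketch (hedged; MyDrawable is a hypothetical custom drawable):
Code:
#include <osg/Drawable>

// Drawable whose drawImplementation runs every frame.
class MyDrawable : public osg::Drawable
{
public:
    MyDrawable() { setUseDisplayList(false); } // no display list caching
    MyDrawable(const MyDrawable& d, const osg::CopyOp& op = osg::CopyOp::SHALLOW_COPY)
        : osg::Drawable(d, op) {}
    META_Object(osg, MyDrawable)

    virtual void drawImplementation(osg::RenderInfo& renderInfo) const
    {
        // raw GL calls go here; called on every render traversal now
        // that display lists are disabled
    }
};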
Cheers,
Fred
Hi,
I am confused about how to use PBOs in OSG. I browsed the forum quite a bit but
still can't make my code work.
1) GPU->CPU pixel transfers
As I understand it, an osg::Image with an osg::PixelBufferObject cannot be used
(yet) to read FBO contents. I have actually checked that and could
Hi,
In a drawable cull callback, I have found that State::getModelViewMatrix() and
State::getProjectionMatrix() are different from the matrices returned by
Camera::getViewMatrix() and Camera::getProjectionMatrix().
I actually can't perform world to screen space coordinate conversion using
Tim Moore wrote:
On Thu, Dec 2, 2010 at 9:18 AM, Fred Smith wrote:
Hi,
In a drawable cull callback, I have found that State::getModelViewMatrix()
and State::getProjectionMatrix() are different from the matrices returned
by Camera::getViewMatrix() and Camera
Hi,
The way I should be using Element Buffer Objects is unclear to me.
If I use the following code:
Code:
osg::Geometry* polyGeom = new osg::Geometry();
polyGeom->setUseVertexBufferObjects(true);
osg::ref_ptr<osg::DrawElementsUShort> de = new
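For reference, a hedged sketch of the complete pattern I would expect to be sufficient (vertex and index data are placeholders):
Code:
// With setUseVertexBufferObjects(true), OSG should create the VBO/EBO
// itself; no explicit buffer objects ought to be required.
osg::ref_ptr<osg::Geometry> polyGeom = new osg::Geometry;
polyGeom->setUseVertexBufferObjects(true);

osg::ref_ptr<osg::Vec3Array> verts = new osg::Vec3Array;
verts->push_back(osg::Vec3(0.0f, 0.0f, 0.0f));
verts->push_back(osg::Vec3(1.0f, 0.0f, 0.0f));
verts->push_back(osg::Vec3(1.0f, 0.0f, 1.0f));
polyGeom->setVertexArray(verts.get());

osg::ref_ptr<osg::DrawElementsUShort> de =
    new osg::DrawElementsUShort(GL_TRIANGLES);
de->push_back(0); de->push_back(1); de->push_back(2);
polyGeom->addPrimitiveSet(de.get());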
robertosfield wrote:
Hi Fred,
Your interpretation of what Geometry::setUseVertexBufferObjects(true)
does is correct, so what you are seeing looks to be a bug. Which version of
the OSG are you using?
A couple of weeks ago I was looking into VBO setup issues relating to
the new serializers and
Robert,
Out of curiosity, why does the osgparametric sample explicitly set a VBO and an
EBO, in addition to calling setUseVertexBufferObjects(true)?
Thanks,
Fred
I am not surprised this message has been left unanswered as it is not rocket
science to release shader objects.
It might be a bug, or at least a very unfortunate behavior, in the AMD Catalyst
drivers, see:
JVM won't die after shader compile
I can't test your code right now because the code I am currently using in a
geometry shader is compiled incorrectly on my machine and I can't make it work!
I have reported a bug to AMD that they are currently investigating.
I will test the code in the future, I just don't exactly know when.
Hi,
I need to define an object that can be seen as a sphere - at least this is the
most convenient analogy I can find right now.
- it is defined essentially by meta-properties, like its radius and a
center position. It does not contain actual geometry data per se. The actual
rendered
Paul Martz wrote:
The Drawable level is entirely appropriate. Shaders can be attached per
Drawable.
Why not just use an empty Geometry, with appropriate shaders attached?
--
-Paul Martz Skew Matrix Software
http://www.skew-matrix.com/
Hi Holger,
I too am very interested in OSG support for OpenGL 4.x tessellation shaders.
Maybe I can test your code, feel free to contact me in private.
Not sure if your code already made it into the trunk (it doesn't seem like it,
as far as I could see).
Cheers,
Fred
Hi,
I'm in a situation in which my Win32 process takes a lot of time to exit,
something like 15-20 seconds. This happens as soon as I attach shaders to my
geode.
Something I want to do first is to make sure I release OSG shader-related
resources completely before exiting the application.
Thanks for your replies.
I understand what's going on with osggeometryshaders with respect to the
camera.
I confirm turning culling off (I use osg::Node::setCullingActive(false) on my
geode) makes it work.
Thanks again.
robertosfield wrote:
This does suggest that small feature culling may well be the cause.
I'm surprised my suggested change didn't work for you, perhaps
something went amiss with the application of the change. Others in
the past have come across this issue and solved it roughly the way
Hi,
When I only have one point in a GL_POINTS DrawElements primitive, the shaders
in my StateSet have no effect. It seems a primitiveset cannot contain only
one point, or the point does not reach the shaders for some reason.
To reproduce the problem, change the osggeometryshaders sample so
robertosfield wrote:
Hi Fred,
My guess is that small feature culling is culling the Geometry before
it even gets into the draw traversal. You can switch off small
feature culling by doing:
viewer.getCamera()->setCullingMode(viewer.getCamera()->getCullingMode()
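For anyone searching later, the full expression as I understand it (hedged on the exact enum scope):
Code:
// Clear the small-feature-culling bit from the camera's culling mode.
viewer.getCamera()->setCullingMode(
    viewer.getCamera()->getCullingMode() &
    ~osg::CullSettings::SMALL_FEATURE_CULLING);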
Hi JP,
As soon as I attach an image, OSG crashes on me. The problem arises in
FrameBufferObject::Pimpl::Pimpl.
Code:
// This code crashes
osg::Texture2D *texture = new osg::Texture2D();
texture->setTextureSize(1024, 1024);
Frederic Bouvier wrote:
Maybe:
textureDepthStencil->setInternalFormat( GL_DEPTH_STENCIL_EXT );
...
camera->attach(osg::Camera::PACKED_DEPTH_STENCIL_BUFFER, textureDepthStencil);
Could work. As the depth and stencil buffers are packed, so should the texture
be. (re)read
I've also tried with an image:
Code:
Image *pdsImage = new Image();
pdsImage->allocateImage(256, 256, 1, GL_DEPTH24_STENCIL8_EXT,
GL_UNSIGNED_INT_24_8_EXT, 1);
camera->attach(PACKED_DEPTH_STENCIL_BUFFER, pdsImage);
This code results in the following error and then the application crashes.
This looks like an OSG bug to me. The code is in osg\Texture2D.cpp, in
Texture2D::apply.
With PACKED_DEPTH_STENCIL_BUFFER, internalFormat should be
GL_DEPTH24_STENCIL8_EXT, format should be GL_DEPTH_STENCIL_EXT and type
GL_UNSIGNED_INT_24_8_EXT.
The current OSG 2.8 code makes this
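Putting those three values together, a hedged sketch of the texture setup that should follow (per Frederic's suggestion above; camera is the RTT camera and the size is a placeholder):
Code:
#include <osg/Texture2D>

// Packed depth/stencil texture attachment for an RTT camera.
osg::ref_ptr<osg::Texture2D> depthStencil = new osg::Texture2D;
depthStencil->setTextureSize(1024, 1024);
depthStencil->setInternalFormat(GL_DEPTH24_STENCIL8_EXT);
depthStencil->setSourceFormat(GL_DEPTH_STENCIL_EXT);
depthStencil->setSourceType(GL_UNSIGNED_INT_24_8_EXT);
camera->attach(osg::Camera::PACKED_DEPTH_STENCIL_BUFFER, depthStencil.get());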
Vivien, Jordi,
Thanks, it works fine. Somehow I missed this one.
ARB_texture_non_power_of_two and co. Got it.
Fred
The following works fine:
camera->attach(osg::Camera::PACKED_DEPTH_STENCIL_BUFFER, GL_DEPTH_STENCIL_EXT);
Why doesn't it work if I create a texture? I need to create a texture and an
image, as I want to retrieve stencil buffer data after frame() has been called.
Cheers,
Fred