Hi,
I'm trying to measure the render time of my application. The render time is
defined as the time between the start of a new frame (the beginning of the frame
loop) and the time at which all CPU and GPU operations for that frame have
finished. However, I want to exclude swapBuffers from the measurement.
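A hedged sketch of one way to do this, assuming osg::GraphicsContext::SwapCallback (which replaces the context's swap step): call glFinish() and take the end timestamp inside the callback, before delegating to the real swap, so the vsync wait is excluded. The class and member names below are illustrative, not part of OSG.

```cpp
#include <chrono>
#include <osg/GraphicsContext>
#include <osg/GL>

// Sketch: a swap callback runs in place of swapBuffers(), so glFinish() plus
// a timestamp here measures CPU+GPU frame work while excluding the
// (possibly vsync-blocked) swap itself. Names are illustrative.
struct TimedSwapCallback : public osg::GraphicsContext::SwapCallback
{
    std::chrono::high_resolution_clock::time_point frameStart; // set at top of frame loop
    double lastFrameMs = 0.0;

    void swapBuffersImplementation(osg::GraphicsContext* gc) override
    {
        glFinish(); // wait for all GPU work of this frame, but not for vsync
        auto now = std::chrono::high_resolution_clock::now();
        lastFrameMs =
            std::chrono::duration<double, std::milli>(now - frameStart).count();
        gc->swapBuffersImplementation(); // the real swap happens after measuring
    }
};
// usage: gc->setSwapCallback(new TimedSwapCallback); refresh frameStart each frame.
```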
Hi,
I have an annoying little issue here and fail to understand why it happens or
what goes wrong.
I'm using a shader to compute various output textures, some for direct display,
others for post-processing.
To be able to easily display the rendered textures, I created a little helper
class
Hi,
Ah nice, that's even better. I will experiment with it.
Thank you!
Cheers,
Philipp
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68489#68489
___
osg-users mailing list
Hi,
I want to measure the frame time without waiting for vsync. On my test system I
have no way to turn vsync off, but I still need to benchmark.
So the plan was to call glFinish() after the rendering traversal is over, but
BEFORE swapBuffers() (because that's the call that actually blocks until
How do I provide a graphics context for a viewer? I see there is a method
returning all graphics contexts, but I can't find any method to set the
context, and the viewer's constructor doesn't take a graphics context either.
Maybe there is an easier way to accomplish what I need? I want to
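For what it's worth, a sketch of the usual pattern: as far as I know, the context is attached to the viewer's camera rather than to the viewer itself. The Traits values below are illustrative.

```cpp
#include <osg/GraphicsContext>
#include <osgViewer/Viewer>

// Sketch: create a context from Traits and hand it to the viewer's camera.
osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
traits->x = 0;
traits->y = 0;
traits->width = 1280;
traits->height = 720;
traits->windowDecoration = true;
traits->doubleBuffer = true;

osg::ref_ptr<osg::GraphicsContext> gc =
    osg::GraphicsContext::createGraphicsContext(traits.get());

osgViewer::Viewer viewer;
viewer.getCamera()->setGraphicsContext(gc.get());
viewer.getCamera()->setViewport(new osg::Viewport(0, 0, traits->width, traits->height));
```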
Hi,
I'm trying to create a custom graphics context to replace the default X11
Graphics context (I need to add some additional code).
To do this, I created a new "CustomGraphicsContextX11" class, deriving from
"PixelBufferX11". I reimplemented the virtual methods that I need to adjust.
Then, I
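A minimal sketch of that derivation pattern, assuming osgViewer's X11 pixel buffer header; the overridden method and the extra code are placeholders.

```cpp
#include <osgViewer/api/X11/PixelBufferX11>

// Sketch: derive from PixelBufferX11 and override only the virtuals that
// need adjusting, delegating the rest to the base class.
class CustomGraphicsContextX11 : public osgViewer::PixelBufferX11
{
public:
    CustomGraphicsContextX11(osg::GraphicsContext::Traits* traits)
        : osgViewer::PixelBufferX11(traits) {}

    bool makeCurrentImplementation() override
    {
        // ... additional custom code goes here ...
        return osgViewer::PixelBufferX11::makeCurrentImplementation();
    }
};
```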
Hi,
fixed it by calling
Code:
std::locale::global(std::locale());
just after Qt/GTK initialization.
Thanks guys.
Cheers,
Philipp
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=68342#68342
Hi,
I've made further tests. It indeed seems to be a locale issue. When replacing
all "." with "," for floating point numbers, I get an almost correct result.
So I guess now I need to figure out how to enforce a certain locale for the
loader?
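One stdlib-only way to pin the locale around a parsing or loading call, as a sketch: force LC_NUMERIC to the "C" locale, do the work, then restore. LocaleGuard and parseWithCLocale are made-up names, not part of the DAE plugin.

```cpp
#include <clocale>
#include <cstdlib>
#include <string>

// RAII guard: force the "C" locale (which uses '.' as the decimal separator)
// for the enclosing scope, then restore the previous locale on destruction.
class LocaleGuard
{
public:
    LocaleGuard() : saved_(std::setlocale(LC_NUMERIC, nullptr))
    {
        std::setlocale(LC_NUMERIC, "C");
    }
    ~LocaleGuard() { std::setlocale(LC_NUMERIC, saved_.c_str()); }

private:
    std::string saved_;
};

// Example: parse a float the way the loader expects, regardless of what
// locale Qt/GTK installed at startup.
double parseWithCLocale(const char* text)
{
    LocaleGuard guard; // '.' is the radix character inside this scope
    return std::strtod(text, nullptr);
}
```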
Thank you!
Cheers,
Philipp
PS: My system language
robertosfield wrote:
> Hi Philipp,
>
> Is there any chance that the COLLADA_DOM assumes a certain locale
> while Qt is changing it?
>
> Robert.
Hi Robert,
interesting idea. However, the loader seems to parse texture paths correctly.
Wouldn't that also be messed up if the issue was caused by
Hi,
I'm trying to build a COLLADA model viewer using OSG, the osgdb_dae plugin and
Qt for the user interface.
For integrating OSG into Qt, I have followed the example implementation here:
https://github.com/openscenegraph/OpenSceneGraph/blob/master/examples/osgviewerQt/osgviewerQt.cpp
I can see
Hi,
thank you for your input. I've resolved the problem by marking a couple more
StateSets as dynamic.
Maybe it would be a good idea to include the threading + dynamic nodes hint in
the official documentation? Personally, I wasn't aware that data variance
settings influence threading behavior.
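For readers hitting the same thing, a sketch of the fix described above (`node` stands for whatever node's state is modified during the update phase):

```cpp
#include <osg/Object>
#include <osg/StateSet>

// Sketch: StateSets (and Drawables) modified while draw threads may still be
// reading them must be marked DYNAMIC, otherwise threading models such as
// DrawThreadPerContext are free to overlap the next update with the draw.
osg::StateSet* ss = node->getOrCreateStateSet();
ss->setDataVariance(osg::Object::DYNAMIC);
```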
Hi,
I want to track the delta movement of every fragment in my scene. Only tracking
on a per object basis would not be enough because an object may, for example,
rotate (so that one side of the object approaches the camera while the other
side doesn't) or it may have moving parts.
You are talking
Hi,
thanks for your (as always) really quick response!
I'm trying to achieve various things. As a simple example, I want to measure
the time it takes to render 1 frame (CPU and GPU time).
For that, I used something along the lines of:
> //..
> start = highPrecisionTimer.now();
>
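A minimal self-contained sketch of that pattern, using std::chrono in place of the (not shown) high-precision timer class:

```cpp
#include <chrono>

// Sketch: time an arbitrary callable in milliseconds; for frame timing the
// callable would be one iteration of the render loop.
template <typename Fn>
double timeMs(Fn&& fn)
{
    auto start = std::chrono::high_resolution_clock::now();
    fn(); // the work being measured
    auto end = std::chrono::high_resolution_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}
```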
Hi,
I'm using multiple cameras and want them to render the scene in parallel to
increase GPU load. For that, I set the threading model of my Viewer to
"ThreadPerCamera".
That all works fine, however, I'm facing the issue that the viewer seems to
begin the next frame before the current frame
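For reference, a sketch of the setup described above (ThreadPerCamera is, as far as I know, an alias for CullThreadPerCameraDrawThreadPerContext in osgViewer::ViewerBase):

```cpp
#include <osgViewer/Viewer>

// Sketch: one cull thread per camera, draw threads per graphics context.
osgViewer::Viewer viewer;
viewer.setThreadingModel(osgViewer::ViewerBase::ThreadPerCamera);
// If the next frame must not start until all draw threads have finished,
// the end-of-frame barrier can be moved after the swap:
viewer.setEndBarrierPosition(osgViewer::ViewerBase::AfterSwapBuffers);
```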
Hi,
> If you only need the movement towards/away from you, you can use the
> previous frames depth and perform difference computation based on the
> difference of the linear depth.
I'm not exactly sure what you mean here. Are you talking about rendering the
depth of each frame to a texture
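If I read the suggestion correctly, it amounts to ping-ponging two depth textures: render this frame's depth into one while sampling last frame's depth from the other. A hedged sketch, with sizes and names illustrative:

```cpp
#include <osg/Camera>
#include <osg/Texture2D>

// Sketch: two depth textures, alternated each frame, so a comparison pass can
// sample both the current and the previous frame's depth.
osg::ref_ptr<osg::Texture2D> depthTex[2];
for (int i = 0; i < 2; ++i)
{
    depthTex[i] = new osg::Texture2D;
    depthTex[i]->setTextureSize(1024, 1024);
    depthTex[i]->setInternalFormat(GL_DEPTH_COMPONENT);
    depthTex[i]->setSourceType(GL_FLOAT);
}
// each frame N: render depth into depthTex[N % 2], bind depthTex[(N + 1) % 2]
// as a sampler for the difference computation.
rttCamera->attach(osg::Camera::DEPTH_BUFFER, depthTex[frameNumber % 2].get());
```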
Hi,
I'm using shaders to do some pre-computations for a real-time radar simulation.
(Some further processing is done with CUDA.) The delta distance is required for
the calculation of the doppler effect.
So yes, I guess I'm using shaders in a weird way. Unfortunately it would be
very difficult
Hi Robert,
unfortunately some objects move in my scene, so it's not enough to only hold the
old view matrix.
The camera AND any object can move.
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67932#67932
Hi,
that was my first idea as well, however, then I realized that this approach
does not work.
The problem is that there is no way to know if a certain pixel still shows the
same fragment. For example, if the camera view angle changes by 180 degrees in
one frame, the pixel at the 0,0 texture
My goal is to create a fragment shader that computes the delta distance to each
fragment compared to the previous frame.
In the shader, I can easily calculate the current distance to a fragment by
using built in functions, the problem is that I also need access to the
fragment position in the
Hi,
is it possible to retrieve the current modelView matrix of a billboard node?
For matrix transforms, one can simply use ->getMatrix() and multiply that with
the current view matrix.
I noticed that there is a method called "computeMatrix", but I'm a little bit
confused in how to use it. What
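From a look at the header, computeMatrix() appears to take the eye point in the billboard's local coordinates and fill in the per-drawable rotation matrix. A hedged sketch (the multiplication order may need checking against your conventions):

```cpp
#include <osg/Billboard>
#include <osg/Matrix>

// Sketch: recover the model-view matrix a billboard effectively uses for its
// first drawable, given the accumulated model-view at that point in the graph.
osg::Matrix billboardRot;
osg::Vec3 eyeLocal =
    osg::Vec3(0.0f, 0.0f, 0.0f) * osg::Matrix::inverse(currentModelView);
billboard->computeMatrix(billboardRot, eyeLocal, billboard->getPosition(0));
osg::Matrix fullModelView = billboardRot * currentModelView;
```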
Never mind, forgot to change my texture sampler from sampler2D to
sampler2DRect...
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67874#67874
Hi,
I'm using several RTT cameras in my scene. So far, I have only rendered to
osg::Texture2D and that worked fine; however, I need to render to NPOT
(non-power-of-two) textures now, so that will no longer work.
For that, I changed my render target to an osg::TextureRectangle and left
everything else pretty much
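For reference, a sketch of that swap. Note that in GLSL an osg::TextureRectangle is sampled through sampler2DRect with unnormalized texel coordinates, so the shaders need updating along with the C++ side:

```cpp
#include <osg/Camera>
#include <osg/TextureRectangle>

// Sketch: TextureRectangle accepts arbitrary (non-power-of-two) sizes.
osg::ref_ptr<osg::TextureRectangle> target = new osg::TextureRectangle;
target->setTextureSize(1920, 1080); // NPOT size is fine here
target->setInternalFormat(GL_RGBA);
target->setFilter(osg::Texture::MIN_FILTER, osg::Texture::LINEAR);
target->setFilter(osg::Texture::MAG_FILTER, osg::Texture::LINEAR);

rttCamera->attach(osg::Camera::COLOR_BUFFER, target.get());
```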
Hi,
setting useDisplayLists to false indeed fixed both issues. Thank you very much.
Cheers,
Philipp
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=67835#67835
(and therefore also of the drawCallback?)
operations.
robertosfield wrote:
> Hi Philipp
>
> On 15 June 2016 at 14:48, Philipp Meyer <> wrote:
>
> > figured it out.
> > One needs to use
> >
> >
> > Code:
> > viewer->setReleaseContextAtEndOfFram
Hi,
so here is a little update on my current progress. I have a working solution,
but I'm not 100% happy with it, as it is pretty messy and offers bad performance.
The basic idea is to assign a uniform variable to each and every transform node
of the scene graph, storing its total modelMatrix so
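A sketch of that idea as a cull callback (names illustrative; under threaded models the StateSet being written each frame would also need to be marked DYNAMIC):

```cpp
#include <osg/NodeCallback>
#include <osg/StateSet>
#include <osg/Transform>
#include <osg/Uniform>

// Sketch: recompute the node's accumulated model matrix from the visitor's
// node path each frame and publish it as a uniform on the node's StateSet.
class ModelMatrixCallback : public osg::NodeCallback
{
public:
    void operator()(osg::Node* node, osg::NodeVisitor* nv) override
    {
        osg::Matrix model = osg::computeLocalToWorld(nv->getNodePath());
        osg::StateSet* ss = node->getOrCreateStateSet();
        osg::Uniform* u =
            ss->getOrCreateUniform("modelMatrix", osg::Uniform::FLOAT_MAT4);
        u->set(osg::Matrixf(model));
        traverse(node, nv);
    }
};
// usage: transformNode->setCullCallback(new ModelMatrixCallback);
```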
Hi,
I am currently working on a Shader that is supposed to color fragments
approaching the camera red and fragments departing the camera green.
So for example, if an object in the scene is traveling towards the camera, it
should be rendered red, otherwise green.
For that, my basic idea was to
Hi,
figured it out.
One needs to use
Code:
viewer->setReleaseContextAtEndOfFrameHint(false);
to prevent the context from getting released after a frame is rendered.
That way, its resources, like textures, can still be accessed after the frame
completes.
Thank you!
Cheers,
Philipp
Hi,
I'm currently facing some issues in passing a texture2d created via OSG to the
CUDA low level driver API. I'm trying to run a CUDA kernel on a texture after
calling viewer->renderingTraversals();
As far as I have understood, all that's required is getting the underlying
texture ID for
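For anyone with the same problem, a sketch of the lookup plus the driver-API registration step (`gc` and `texture` stand for the graphics context and the osg::Texture2D in question; error checking omitted):

```cpp
#include <cuda.h>
#include <cudaGL.h>
#include <osg/Texture2D>

// Sketch: after renderingTraversals(), fetch the GL id OSG allocated for the
// texture on this context and register it as a CUDA graphics resource.
unsigned int contextID = gc->getState()->getContextID();
osg::Texture::TextureObject* texObj = texture->getTextureObject(contextID);
GLuint glId = texObj->id();

CUgraphicsResource resource = 0;
cuGraphicsGLRegisterImage(&resource, glId, GL_TEXTURE_2D,
                          CU_GRAPHICS_REGISTER_FLAGS_NONE);
// then, per frame:
//   cuGraphicsMapResources(1, &resource, 0);
//   CUarray array; cuGraphicsSubResourceGetMappedArray(&array, resource, 0, 0);
//   ... launch the kernel on the array ...
//   cuGraphicsUnmapResources(1, &resource, 0);
```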
Hi,
I was able to figure out the issue.
For everyone wondering, I was missing the following line:
textureImage->setInternalTextureFormat(GL_RGBA16F_ARB);
In other words, one needs to set the format on the image as well as on the
texture for everything to work properly. Hope this helps someone
Hi,
I did some more testing and it turns out that I can set a texel to a color with
values > 1.0 just fine in the C++ code.
When using image->setColor(osg::Vec4(1,2,3,4),x,y,0) before reading it with
getColor, I can get results > 1.0.
Does that mean that the shader itself is clamping the
Hi,
for my current project I need to do some computations in the fragment shader
and retrieve the values within my application. For that I am using the render
to texture feature together with a float texture.
I'm having some trouble reading values > 1.0 though. It seems like the values
are
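For reference, a sketch of an RTT setup that avoids the [0,1] clamp; the float internal format has to be set on both the texture and, for the readback path, the attached image:

```cpp
#include <osg/Camera>
#include <osg/Image>
#include <osg/Texture2D>

// Sketch: a float color target keeps shader outputs > 1.0 from being clamped.
osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D;
tex->setTextureSize(512, 512);
tex->setInternalFormat(GL_RGBA16F_ARB); // float format, no clamping
tex->setSourceFormat(GL_RGBA);
tex->setSourceType(GL_FLOAT);

osg::ref_ptr<osg::Image> image = new osg::Image;
image->allocateImage(512, 512, 1, GL_RGBA, GL_FLOAT);
image->setInternalTextureFormat(GL_RGBA16F_ARB);

// attaching the image gives CPU-side access to the rendered values
rttCamera->attach(osg::Camera::COLOR_BUFFER, image.get());
```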
Hi,
so after countless more hours of debugging I have identified the issue.
Within the "setUpEGL" function I already set the eglContext to be current. So
once the "makeCurrentImplementation" is called, the context is already set to
be current. For some reason, when using singlethreaded
Hi,
I used "PixelBufferX11" as a reference when creating the EGLGraphicsContext,
and I noticed that PixelBufferX11 calls the "init()" method in the constructor
as well as in the "realizeImplementation" method, if necessary. So my call to
"realizeImpl" in the constructor is pretty much just
Hi,
I added the source code for the custom graphicsContext. Sorry for the delay.
Thank you!
Cheers,
Philipp
--
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=66918#66918
DISCLAIMER: I'm not a graphics or OpenGL expert, so if something is dumb or
doesn't make sense, please let me know.
Hi,
I am trying to use OSG to create an application for a real time Linux system
without a windowing system.
To get OSG to work properly, I create my own GraphicsContext and
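A sketch of the off-screen setup under those constraints; whether a pbuffer context can actually be created without a windowing system depends on the platform backend (e.g. an EGL-based context like the custom one discussed in this thread):

```cpp
#include <osg/GraphicsContext>
#include <osgViewer/Viewer>

// Sketch: request an off-screen pbuffer context via Traits.
osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
traits->width = 1024;
traits->height = 1024;
traits->pbuffer = true;      // no window
traits->doubleBuffer = false;

osg::ref_ptr<osg::GraphicsContext> gc =
    osg::GraphicsContext::createGraphicsContext(traits.get());

osgViewer::Viewer viewer;
viewer.getCamera()->setGraphicsContext(gc.get());
viewer.getCamera()->setViewport(new osg::Viewport(0, 0, traits->width, traits->height));
```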