Bryan,
My initial thought was that you weren't saying anywhere that the image was
floating point. Digging further, I realised that TransferFunction should be
doing it for you - I've never used this before - but this line (in
osg/TransferFunction1D.cpp) looks a little odd to me:
_image->setImage(n
Max,
For starters, you probably want GL_RGB8 (0x8051) and not GL_TEXTURE_2D
(0x0DE1) in your setImage call.
But in general it looks a bit odd to me, and I'm not sure what your
intention was. First you get the pointer to the texture's image, and then
you set it to something else. I imagine you just
Okay, so something like this should work, I guess.
void updateTexture( IplImage *img, osg::ref_ptr<osg::Geode> geode )
{
    osg::StateSet *state = geode->getOrCreateStateSet();
    osg::Texture2D *tex = dynamic_cast<osg::Texture2D*>(
        state->getTextureAttribute( 0, osg::StateAttribute::TEXTURE ) );
    if( !tex ) return;

    // Texture2D::setImage() takes an osg::Image*, so wrap the OpenCV data
    // first (assuming 8-bit BGR; NO_DELETE means img must outlive the wrapper).
    osg::ref_ptr<osg::Image> image = new osg::Image;
    image->setImage( img->width, img->height, 1, GL_RGB8, GL_BGR, GL_UNSIGNED_BYTE,
                     (unsigned char*)img->imageData, osg::Image::NO_DELETE );
    tex->setImage( image.get() );
}
_
Robert,
Not that I want to hijack the thread, but a small (more OpenGL) question on
this area, as it has always confused me:
The internalTextureFormat is the pixel type when stored down on the
> GPU. Typically it's be GL_RGB, or just 3 (number of components) for
> RGB data. Both the internalTex
Hi,
FYI, there was a posting of a (presumably similar) WASD-type manipulator by
Viggo Løvli back in August 08 - search the archives for "How to write a
camera manipulator..."
David
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
Chuck,
I have had similar issues (with crashes in releaseGLObjects when views get
destroyed) but can't actually recall what I did to fix them.
You could try calling viewer->releaseGLObjects() before you destroy the
view. (previous posts seem to suggest that this might be the right thing to
do)
D
Rob,
What image format are you actually loading?
Regards,
David
J-S (and others),
You could look at doing this the same way the depth partition node does
it, which is what I do:
I use a class based on an OSG camera with an overridden traverse method, that
forces the projection matrix to a particular z near and z far. Oh, and the
camera has setComputeNearFar
Chris,
> After some brain-twisting, I did realize that even with z comparison off,
> OGL is
> probably rejecting the skydome because it's beyond the far clip plane. I've
> been trying to
> think of a way to fool this, but it seems like it is unavoidable.
>
That's exactly what I found (or even w
Hi J-S,
The problem when the skydome renders last is that it won't be blended
> correctly with transparent objects (they need to be rendered after all
> opaque objects, and sorted back to front).
>
Ah. I hadn't considered that in detail. (I wonder what my app's behaviour is
then? I don't have man
Hi.
The WindowSizeHandler (in osgViewer/ViewerEventHandlers) does exactly this.
Look at the "toggleFullScreen" method.
Better yet, just add a WindowSizeHandler to your viewer's event handler list
with
viewer.addEventHandler(new osgViewer::WindowSizeHandler());
Hope that helps,
David
Kim,
A nice piece of work.
In case it helps anyone, for FFTW on Windows, I used 3.2.1. I didn't bother
compiling it, but just did this:
http://www.fftw.org/install/windows.html
which worked fine (even with the free Visual C++ 9.0 ! )
I had a couple of minor compile issues with addCullCallback - I gue
Umit,
> I implemented my ocean surface using the sum of sines method
Bear in mind that so long as your wave numbers are integer subdivisions of
the "tile" size, the result from an FFT approach is the same as the result
from a sum of sinusoids approach, just higher performance.
David
_
Umit,
>...Error : When I open up the osgOceanExample there is some error in vertex
shader as you can see from the attached screenshot.
I had something similar - I think this is just because the shader "constructor"
can't find the underlying shaders; AFAIK the resource folder has to be
located in the
Brian,
> A GPU solution is great for the rendering, but it doesn't support all the
other things on the cpu side that need to know about the ocean.
Agreed. However, if the number of CPU-side items is limited in terms of
sample points, then the GPU/CPU tradeoff can still favour the GPU, i.e. full ocean
F
Kim,
For very large
> expanses of ocean the problem I foresee is the time it takes to update the
> vertices and primitive sets.
> However, since the FFT technique is tileable it would be possible to only
> update 1 tile and then translate
> them into position using a shader. This would rule out any
Kim,
It does sound difficult to implement as a general case.
Actually, I think that the only way to meet all current techniques with as
common a scenegraph structure as possible is to render the heightfield to
texture (generated somehow, e.g. FFT on CPU, sum of sines, FFT on GPU), then
use vert
Robert,
At the moment the OBJ loader builds a material to stateset map, which is
indexed by the material name. However when the stateset is applied to a
geometry, the material name is effectively lost.
Also the OBJ loader reads the group name (i.e. "g groupname" in the .obj)
and any object name (
Although be a little wary... I have had problems (in the past, so I don't
know whether they are still there) with particular driver implementations -
mainly ATI - not actually populating some of the GLSL required matrices,
which caused me no end of confusion.
I ended up just defining a set of unif
I force draw order in an Ortho HUD by deliberately using the Z-coordinate (
e.g. via a transform) to define "layer" of particular components. Nice and
easy - is this possible for you?
David
I think I might be having a related problem. I have recently upgraded from
OSG 1.2 to 2.1.1. In my old app, I had a node inherited from Camera, loosely
based on the osgDepthPartition example, which worked as a parent for
skyboxes/spheres. It used to:
1) prevent its children from participating in t
> Only a rename...
I suspected that. Does that mean that _my_ problem (which could be distinct
from Adrian's, so apologies for the thread hijack) lies more in the change from
osgProducer::Viewer to osgViewer::Viewer? Where would be a good place to
start looking?
Thanks,
David
Robert,
Here we are again!
Firstly, Farshid is right, even up to OSG 2.1.7. It looks like the "more
elegant" code I submitted way back in the thread "numerical precision error"
of 26/07/06 does break for this case. In particular :
-1 0 0
0 0 -1
0 -1 0
should give a quat of (W,X,Y,Z)=(0,0,0.707,
Dear All,
Does anybody use the DDS plugin to read and write DDS files? I am a bit
confused with regard to RGB vs. BGR; even looking at the DDS plugin code,
I'm still not sure:
1) Does the DDS file format _demand_ that the byte order is BGRA? As far as
I can tell, it doesn't, because you specify
Just to add my 2p, the changes I submitted would only be seen as colour
changes (red/blue swap) on dds write; I didn't do anything with regard to
file sizes.
The image is DXT1 encoded, and includes 6 levels of mipmap. I had a quick
look, and it's falling over in the osgImage.release() method at th
Ah - just noticed. The DDS file you attached claims to have 10 mipmaps, but
there is only data for (and it only makes sense to have) 7. So, firstly,
something looks broken in whatever wrote the file, in terms of retrieving
the number of mipmaps. Setting it to a low value during the image load
works
I do exactly that, using a camera which does the following:
1) Sets DO_NOT_COMPUTE_NEAR_FAR so that the skydome doesn't influence the
auto-z calculation; otherwise large skydome radii will skew the rest of the
scene's znear/zfar
2) Forces the projection matrix so that the skydome radius is within
w
I've had similar problems due to the optimiser optimising away "empty"
groups that had attached statesets. Are you optimising? If you are, try
outputting the file as .osg and comparing it with and without an optimiser
pass.
David
On 02/10/2007, Riccardo Corsi <[EMAIL PROTECTED]> wrote:
>
> Hi Ced
Question 1) : You are trying to retrieve the Euler angles (i.e. rotation
about X,Y,Z in some order) from a matrix. You can google for "Euler" and
Matrix; alternatively start here:
http://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToEuler/index.htm
Question 2) : As mentioned
The HDR plugin does support raw passing of the RGBE values in (through RGBA)
via the RAW option.
FWIW I never use the RAW option as my shaders are already quite taxed enough
without having to decode to float; I'd rather load the image as a float and
take the processing hit on the CPU at load time.
I'm not sure if this helps, but I have in the past found that calls along
the lines of :
geometry->addPrimitiveSet(new
osg::DrawArrays(osg::PrimitiveSet::QUADS,0,vertices->size()));
can take ages. In particular, I think the DrawArrays call can take a while.
On an app of mine, I was doing the abov
Michael
> - With the option "RGB8" ( osgDB::ReaderWriter::Options( "RGB8" )
> )
>
> o The image is converted to LDR before being stored in a texture.
>
> * Doesn't that defy the whole point of being HDR to begin with*?
>
Yes, although it allows you to at least read in RGBE / Radia
Familiar looking stack trace...
Not that this will help you, but we were getting this an awful lot in both
release and debug builds. The specific situation was that we had a subclass
of an OpenThreads thread that had an osgViewer::Viewer as a member - or
actually, we had a ref_ptr to the viewer as
Dear Baker Searles (Bradley?),
Way back when I was looking at OBJ import, I found these useful:
http://paulbourke.net/dataformats/obj/
http://paulbourke.net/dataformats/mtl/
but these are just repeats of what the wotsit.org information has.
Here Ns is described as up to 1000.
However exporters