Re: [osg-users] StateSet and Text, state sets may be shared inadvertently

2012-01-06 Thread Juan Hernando

On 05/01/12 18:25, Robert Osfield wrote:

Hi Juan,

osgText::Text is a special case in that the text is rendered using
textures, and for best performance and memory usage you need to share
these textures, and also share the rest of the text state as well.

OK, I see, I didn't think of the textures.


If you want to decorate the Text with your custom StateSet then it's
best to place it on a Geode above the Text object.
Does that mean that if I use a StateSet created by myself for each text
style and then call setStateSet on each text object, the textures will be
duplicated for each StateSet?
I'm asking because I currently have more drawables in the geode that
holds the text, and I'd prefer not to have to change that.


Thanks,
Juan
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] StateSet and Text, state sets may be shared inadvertently

2012-01-04 Thread Juan Hernando

Dear all,
Trying to use different StateSet with different osgText::Text objects it 
happens that all text objects transparently may share the same StateSet 
depending on how the state sets are created.
Imagine that we have two text objects t1 and t2; then the following 
calling sequences behave as follows:


Case 1
  t1->getOrCreateStateSet()
  // t2 with the default empty state set
t1 will have a new StateSet and t2 will inherit it from its parent.

Case 2
  t1->getOrCreateStateSet()
  t2->getOrCreateStateSet()
Both texts will use the same state

Case 3
  t1->setStateSet(a_state)
  t2->getOrCreateStateSet()
Each object will have its own state.

Case 4
  t1->getOrCreateStateSet()
  t2->setStateSet(a_state)
Each object will have its own state.

Case 5
  t1 // created first no stateset assigned
  t2->getOrCreateStateSet()
Both texts will use the same state

While it may make sense, it seems a bit weird to me.
If this is the expected behaviour, I couldn't find where the reference 
documentation says so. Is it actually supposed to work like that, and 
what is the reason behind it?


Thanks and regards,
Juan

PS I attach a simple program in case someone wants to verify my statements.
#include <osgViewer/Viewer>
#include <osgText/Text>
#include <osgText/Font>
#include <osg/ShapeDrawable>
#include <osg/Geode>
#include <osg/Depth>
#include <osg/Material>
#include <iostream>

int main()
{
    osgViewer::Viewer viewer;

    osg::Geode *geode = new osg::Geode();

    geode->addDrawable
        (new osg::ShapeDrawable(new osg::Box(osg::Vec3(0, 0, 0), 20, 20, 20)));

    osgText::Text *text1 = new osgText::Text();
    text1->setText("label1");
    text1->setPosition(osg::Vec3(10, 10, 10));
    text1->setAxisAlignment(osgText::Text::SCREEN);
    geode->addDrawable(text1);

    // Uncommenting this line and commenting the other two below will cause
    // both labels to share the same state set.
    //state = text1->getOrCreateStateSet();
    osg::StateSet *state = new osg::StateSet();
    //text1->setStateSet(state);
    //state->setAttributeAndModes
    //    (new osg::Depth(osg::Depth::ALWAYS, 0, 1, false));
    osg::Material *material = new osg::Material();
    //material->setDiffuse(osg::Material::FRONT, osg::Vec4(1, 0, 0, 1));
    //state->setAttributeAndModes(material);

    osgText::Text *text2 = new osgText::Text();
    text2->setText("label2");
    text2->setPosition(osg::Vec3(-10, -10, -10));
    text2->setAxisAlignment(osgText::Text::SCREEN);
    geode->addDrawable(text2);
    material = new osg::Material();
    material->setDiffuse(osg::Material::FRONT, osg::Vec4(0, 0, 1, 1));
    state = text2->getOrCreateStateSet();
    state->setAttributeAndModes(material);

    viewer.setSceneData(geode);
    viewer.run();
}
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Texture Buffer Objects status?

2011-03-24 Thread Juan Hernando

Hi Fred,

The call to glTextureBufferEXT, during the rendering, is not needed. It is only 
needed when you prepare the Texture Buffer, but not during the display.

Does it make sense to you if I comment out the following line of code below 
(see // COMMENTED OUT, below):
I'll be very busy until next Friday, so I can't reply to you properly. A 
quick answer is that I just replicated what I saw inside the OSG code 
for other textures. If you're sure that the binding is not needed, 
remove it. I'll try to come back to this issue later and check whether 
it works for me or not.


Cheers,
Juan

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


[osg-users] LogicOp not available in OpenGL 3 version of OSG 2.9.11?

2011-03-22 Thread Juan Hernando

Dear all,
I'm porting an application from OpenGL 2.x to OpenGL 4.x and I've found 
out that LogicOp only works if OSG_GL_FIXED_FUNCTION_AVAILABLE is 
defined. I've checked the OpenGL specs for versions 4.1 and 3.3, and 
logical operations are part of them (with no mention of deprecation). 
Furthermore, I've removed the macro from LogicOp.cpp and the code I had 
that relied on it works fine. Is there any reason, that I'm not aware 
of, for having removed this feature?


Regards,
Juan
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] LogicOp not available in OpenGL 3 version of OSG 2.9.11?

2011-03-22 Thread Juan Hernando

Hi Robert,

I don't recall the specifics of LogicOp, but my guess is that the
OSG_GL_FIXED_FUNCTION_AVAILABLE macro was probably introduced to enable
the GLES1+GLES2 builds.  You could try changing the defines used so
that only GLES builds disable LogicOp.
OK, thanks, that makes sense. I'll try to send the modified file once I 
have time to take a more in-depth look.


Juan
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Can I increment the gl_FragData?

2011-02-23 Thread Juan Hernando

On 23/02/11 11:54, Martin Großer wrote:

Ok, I didn't understand that. So, either way I need the ping-pong
process. I am writing the ping-pong version at the moment. I hope
this works, but more about that later. :-)
In the blending case you don't need ping-pong if you use 
NV_texture_barrier 
(http://developer.download.nvidia.com/opengl/specs/GL_NV_texture_barrier.txt), 
as Sergey pointed out. I forgot about that.


Cheers,
Juan
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Can I increment the gl_FragData?

2011-02-22 Thread Juan Hernando

Hi Martin,
Depending on what you need to do, additive blending (as has already been 
suggested) may work fine. Ping-pong is not an option if you need to 
accumulate the output on a per-triangle basis, because it requires a 
buffer swap per triangle to guarantee correct results.
Can you be a bit more specific about what type of geometry you are 
rendering into the target textures and what the sequence of additions is?


If you need something more advanced, read/write random access to memory 
buffers is actually possible, but you need a graphics card that supports 
EXT_shader_image_load_store (at least an NV GTX4xx or AMD HD5xxx). 
However, OSG does not support that extension at all, and there are 
non-trivial synchronization/performance issues that you need to be aware 
of when writing your shader code (restrict, volatile and coherent 
variables; shader-level and client-level memory barriers; and atomic 
operations).


Indeed, I want to start experimenting with that extension for 
order-independent transparency, but proper support from OSG is not 
straightforward. A new StateAttribute (similar to textures) is needed to 
handle the binding of texture objects to image units (a new shader 
concept). Also, some mechanism, such as a post-draw callback or a new 
scenegraph node, should be provided to execute the memory barriers on 
the client side.


I'd really like to have this extension smoothly integrated into OSG, 
because it is at least as interesting as hardware tessellation support. 
I can volunteer to do part of the coding, but first I need feedback in 
order to propose a suitable design.


More info here.
http://www.opengl.org/registry/specs/EXT/shader_image_load_store.txt

Regards,
Juan

On 22/02/11 11:06, Martin Großer wrote:

Hello,

that is bad. :-(
So, I want to blend (for example, add) textures iteratively.
Here in pseudo code:

while(1)
{
   Tex0 = Tex0 + Tex1; // Tex0 + Tex1 is calculated in a glsl shader
}

Thanks for your tips.

Cheers

Martin


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Can I increment the gl_FragData?

2011-02-22 Thread Juan Hernando

Hi Martin,

I don't understand the fact that I need a swap per triangle. In the
osggameoflife there is one swap per render pass.

It depends on what you want to do.
If you are rendering a fullscreen quad and performing some pixel-level 
operations, then ping-pong textures are OK. This is the case of the game 
of life example.
However, if you are combining into each target pixel the contribution of 
all the triangles that hit that pixel, then you can't use ping-pong, 
because you would have to swap the read and write textures after each 
triangle (I assume you are not binding the same texture as input and 
output for a shader, because that has undefined behaviour). For example, 
this would be the case when you want to count the fragments falling into 
a pixel. If you can't use blending operations to do your calculations, 
then EXT_shader_image_load_store may be the last resort.


 So my object is for
 example a wall (plane) and I want to add a texture (brush) to this
 wall like an airbrush. That is a simple test scenario.
Assuming your wall is screen-aligned, I'd say that this airbrush example 
falls in the category of blending. However, if your actual intent is to 
develop a realtime texture painting tool, then it might be easier to do 
it completely on the CPU (I'm not an expert in texturing, so there might 
be some clever tricks to take advantage of OpenGL to do it with random 
access image buffers).



Ok, first of all I should try the additive blending with osg::TexEnv.
After this I can try an example with the extension
EXT_shader_image_load_store. I have an NV GTX470, so it should work. And
then we can talk about an integration into OSG. I can send the example
and we can discuss a suitable design. I am very interested in a
smooth integration into OSG.

Me too ;)

Cheers,
Juan
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Texture Buffer Objects status?

2011-01-24 Thread Juan Hernando

Hi Fred,
Good to know that you could make it.
For sure, I don't think my implementation is the best possible. I just 
wanted to get something working quickly. My idea is that someone else 
can take it as a starting point for a future OSG implementation.


Cheers,
Juan

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Texture Buffer Objects status?

2011-01-21 Thread Juan Hernando

Hi Fred,

If you can provide a minimal code example, I can test it on my machine 
and see what the problem is. Otherwise it's impossible for me to know 
what's going on, and I can only guess. By the way, did you try using 
GL_LUMINANCE instead of GL_RGBA?


Cheers,
Juan
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Texture Buffer Objects status?

2011-01-20 Thread Juan Hernando

Hi Fred


What texture pixel format are you using? I'm using an integer texture
format. Not sure if OSG's allocateImage method behaves ok in this
case.

I've only used setup code of this style:
image->setImage(size, 1, 1, GL_LUMINANCE32F_ARB, GL_LUMINANCE,
                GL_FLOAT, data, osg::Image::NO_DELETE);
I don't know whether it may fail for other formats.


I am creating the texture the following way:


Code:
bbp::RTNeuron::TextureBuffer *tb = new bbp::RTNeuron::TextureBuffer();
// 4 bytes per pixel, R-G-B-A format as per EXT_texture_integer formats
// specification
tb->setInternalFormat(GL_RGBA8UI);
osg::Image *image = new osg::Image();
// note: width=(128*128), height=1
image->allocateImage(128*128, 1, 1, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, 1);
tb->setImage(0, image);

If you use GL_RGBA_INTEGER then you must use isamplerBuffer or 
usamplerBuffer, otherwise the results are undefined.
For a normalized format I don't know what you were expecting, but it 
can't be the same as with floating point or true integer formats.


The main differences between your code and mine are that I'm setting the 
internal texture format explicitly and I'm using a plain array of data 
instead of RGB/RGBA formats.



Code:
#version 150 compatibility
#extension GL_EXT_gpu_shader4 : enable

uniform samplerBuffer tex;

void main(void)
{
    if (textureSizeBuffer(tex) == (128*128)) // size in pixels
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // red (as I am expecting)
    else
        gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0); // green
}

The result is always green here, not red as I am expecting.
I haven't used textureSizeBuffer before, but from the docs I've read 
your code should pretty much work. Is it also wrong with a normalized 
texture format or a floating point format?


Cheers,
Juan
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Texture Buffer Objects status?

2011-01-18 Thread Juan Hernando

Hi,
Sorry for not answering before, but I've been away from the computer. 
Regarding osg::BufferObject, in my OSG version (2.8.3) there is an 
Extensions class declared inside osg::BufferObject. Of course you can 
replace it with GLEW, but the other compile error is probably also 
related to the use of a different version.


Regards,
Juan
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Texture Buffer Objects status?

2011-01-18 Thread Juan Hernando

Hi Fred,

I have never used TBOs before. How do you use your new class?
Is the following correct?


Code:
// Creation
osg::TextureBuffer *tb = new osg::TextureBuffer();
// This will create a buffer of 300 * 4 bytes = 1200 bytes
osg::Image *image = new osg::Image();
// GL_RGBA + GL_FLOAT = 4 floating point components for my color
image->allocateImage(300 /*width*/, 1 /*height is ALWAYS 1, right?*/, 1,
                     GL_RGBA, GL_FLOAT, 1);
// here, feed buffer with data
tb->setImage(image);

I don't see anything wrong here.


Code:
#version 150 compatibility
#extension GL_EXT_gpu_shader4 : enable

For me (an NVIDIA GTX 280 graphics card on Linux), it works just to declare:
#version 130



uniform sampler1D tex; // shall I use sampler1D here, or sampler2D?


This has to be:
uniform samplerBuffer tex;


void main(void)
{
    vec4 color = texelFetch(tex, 0); // I get a compile error here,
    // texelFetch is not a recognized function name (?!). I use a Fermi
    // card with the latest drivers. Confused with the different function
    // names I found on the web (texelFetch, texelFetch1D/2D,
    // texelFetchBuffer which, I understand, are deprecated).
[...]
}

I use texelFetchBuffer, but if you solved it I guess you used the same.

Cheers,
Juan
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Texture Buffer Objects status?

2011-01-17 Thread Juan Hernando

Hi Fred,
As far as I know, they are not planned to be part of OSG's API for 3.0. 
I wrote a class for dealing with these textures based on the code for 
other texture objects. The implementation can be improved to reuse 
Texture::TextureObject instead of redeclaring its own 
TextureBuffer::TextureBufferObject class. Nevertheless, it worked for me.

I can contribute the code for others to use and review for future inclusion.

Regards,
Juan

On 17/01/11 14:55, Fred Smith wrote:

Hi everyone,

Are Texture Buffer Objects supported in OSG? From what I can see, I
have to create and manage them myself.

Cheers, Fred





-- Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=35695#35695






___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Texture Buffer Objects status?

2011-01-17 Thread Juan Hernando



Hi Juan,

Sounds great. Your forum settings are not configured to accept
private messages. I'm interested in your work, if you're willing to
share some code with me drop me an email at fclXYZ.gvsat  gmail.com
(replace 'XYZ' with 'aux')

I prefer sending them to everybody so they can be improved and maybe 
included in trunk in the future.


Regards,
Juan
/* -*-c++-*- OpenSceneGraph - Copyright (C) 1998-2006 Robert Osfield
 * Copyright (C) Juan Hernando Vieites 2011
 *
 * This library is open source and may be redistributed and/or modified under
 * the terms of the OpenSceneGraph Public License (OSGPL) version 0.0 or
 * (at your option) any later version.  The full license is in LICENSE file
 * included with this distribution, and on the openscenegraph.org website.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * OpenSceneGraph Public License for more details.
*/
#include <iostream>

#define GL_GLEXT_PROTOTYPES
#include <GL/glu.h>

#include "TextureBuffer.h"

namespace bbp
{
namespace RTNeuron
{

//static void checkGLErrors(const std::string &message)
//{
//    GLenum error = glGetError();
//    if (error != GL_NO_ERROR)
//        std::cout << "OpenGL error detected: " << gluErrorString(error)
//                  << ", " << message << std::endl;
//}

/*
  Helper classes
*/
void TextureBuffer::TextureBufferObject::bindBuffer(unsigned int contextID)
{
    osg::BufferObject::Extensions* extensions =
        osg::BufferObject::getExtensions(contextID, true);
    if (_id == 0) {
        extensions->glGenBuffers(1, &_id);
    }
    extensions->glBindBuffer(GL_TEXTURE_BUFFER_EXT, _id);
}

void TextureBuffer::TextureBufferObject::bindTextureBuffer
    (osg::State& state, GLenum internalFormat)
{
    glTexBufferEXT(GL_TEXTURE_BUFFER_EXT, internalFormat, _id);
}

/*
  Member functions
*/
void TextureBuffer::apply(osg::State& state) const
{
    const unsigned int contextID = state.getContextID();

    TextureObject* to = getTextureObject(contextID);
    TextureBufferObject *tbo = _textureBufferObjectsBuffer[contextID].get();

    if (to != 0) {
        if (_image.valid() &&
            _modifiedCount[contextID] != _image->getModifiedCount()) {
            /* Update the texture buffer */
            tbo->bindBuffer(contextID);
            glBufferSubData(GL_TEXTURE_BUFFER_EXT, 0,
                            _bufferSize, _image->data());
            glBindBuffer(GL_TEXTURE_BUFFER_EXT, 0);
            /* Update the modified tag to show that it is up to date. */
            _modifiedCount[contextID] = _image->getModifiedCount();
        } else if (_readPBuffer.valid()) {
            std::cerr << "Unsupported operation" << std::endl;
        }

        /* Binding the texture and its texture buffer object as texture
           storage. */
        to->bind();
        tbo->bindTextureBuffer(state, _internalFormat);
    } else if (_image.valid() && _image->data()) {
        /* Temporary copy */
        osg::ref_ptr<osg::Image> image = _image;

        /* Creating the texture object */
        _textureObjectBuffer[contextID] = to =
            generateTextureObject(contextID, GL_TEXTURE_BUFFER_EXT);

        /* Creating the texture buffer object */
        tbo = new TextureBufferObject();
        _textureBufferObjectsBuffer[contextID] = tbo;

        /* Compute the internal texture format,
           this sets _internalFormat to an appropriate value. */
        computeInternalFormat();
        /* Computing the dimensions of the texture buffer */
        _textureWidth = image->s();
        _bufferSize = image->getImageSizeInBytes();
        /* Binding TBO and copying data */
        tbo->bindBuffer(contextID);
        glBufferData(GL_TEXTURE_BUFFER_EXT, _bufferSize, _image->data(),
                     tbo->_usageHint);
        to->setAllocated(true);
        glBindBuffer(GL_TEXTURE_BUFFER_EXT, 0);

        to->bind();
        tbo->bindTextureBuffer(state, _internalFormat);

        /* Update the modified tag to show that it is up to date. */
        _modifiedCount[contextID] = image->getModifiedCount();

        /* To consider */
        //if (_unrefImageDataAfterApply && areAllTextureObjectsLoaded() &&
        //    image->getDataVariance() == STATIC)
        //{
        //    Texture2D* non_const_this = const_cast<Texture2D*>(this);
        //    non_const_this->_image = 0;
        //}
    } else {
        /* This texture type is input only (as far as I'm concerned), so
           it doesn't work without an attached image. */
        glBindBuffer(GL_TEXTURE_BUFFER_EXT, 0);
        glBindTexture(GL_TEXTURE_BUFFER_EXT, 0);
    }
}

void TextureBuffer::computeInternalFormat() const
{
    if (_internalFormatMode != USE_USER_DEFINED_FORMAT) {
        if (_internalFormatMode == USE_IMAGE_DATA_FORMAT) {
            if (_image.valid())
                _internalFormat = _image->getInternalTextureFormat

Re: [osg-users] Old really nasty OpenThreads bug?

2010-12-03 Thread Juan Hernando

Hi Anders,
I've tried your app on Linux and it works fine. I've also reviewed the 
code and it seems to be OK. However, I don't know about Windows DLL 
stuff, so you might be overlooking something (atexit called after 
process threads are cancelled?). In any case, as Robert says, the atexit 
function can be completely avoided using static variables and class 
destructors, and it's probably safer.
Why don't you try this modified version of your example to see what 
happens?:


#define OTBUG_LIBRARY
#include "otbugdll.h"

struct Deallocator
{
    ~Deallocator()
    {
        if (t != 0) {
            t->cancel();
            delete t;
        }
    }
    OpenThreads::Thread *t;
};
static Deallocator deallocator;

// Register atexit that will delete the thread
void regThread( OpenThreads::Thread *t)
{
  deallocator.t = t;
}

Robert Osfield wrote:

Hi Anders,

I have just had a look at your test example and it fails to compile
under Linux due to the atexit() method.

Reviewing the code, I'm a bit confused why you are using the atexit()
method at all.  The OSG itself has lots of threads, objects and
singletons created in different places at different times and works
fine with platform-specific exit mechanisms, so I would have thought
it should be possible for your app as well.

Robert.


___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Old really nasty OpenThreads bug?

2010-12-03 Thread Juan Hernando

Hi Anders
In that case I'm afraid I cannot help you, because I seldom use Windows 
and I don't know enough about DLL termination issues.

Sorry,
Juan
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Graphic Context + Texture Memory

2010-12-03 Thread Juan Hernando

Hi Guy,

Because of the complexity of the scene I wanted to gain a little
performance by using multiple draw threads, but I guess I will have
to choose between performance gain (i.e multi-threaded) and memory
consumption (i.e single-threaded).
Having a thread for each cull traversal increases performance. 
However, if you have a single graphics card, having a thread per camera 
(each one with a graphics context) doesn't necessarily improve 
performance; what is more, it can degrade it. The reason is that each 
thread will be competing for the graphics card, which is an exclusive 
resource, and the driver is forced to perform many context switches.


Regards,
Juan

___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Hidden Viewer

2010-12-01 Thread Juan Hernando

Hi Robert,

If Robert is still reading this thread, is this a known issue with a solution 
or do I have to write a bug report?


Well the problem is most likely in the way you are using the OSG, or
perhaps using an OpenGL driver that doesn't support pbuffers, if you
want to write a bug report for your own app for you own purposes then
fine.  As things stand it doesn't sound likely that it's an OSG bug.
I agree that a probable cause is that his driver or hardware does not 
support pbuffers. However, what is a bit odd to me is this message in 
his trace:
PixelBufferWin32::makeCurrentImplementation, wglMakeCurrent error: Die 
angeforderte Ressource wird bereits verwendet.
In English (using an on-line German-to-English translator) the message 
is "The requested resource is already in use", which seems to be related 
to a threading issue.



If you really do think you've come across an OSG bug then the thing to
do is to write a small example, or modify one of the existing OSG
examples to reproduce the problem, then post this.  Then others can
test the problem out first hand.  It could still be a bug in what
you are trying to do, but others will be able to spot this and point
you at the problem.

As Oliver said, he has the same problem using the very simple and 
self-contained example program that I sent him in my second mail. I 
can't figure out where the problem is, especially considering that this 
type of code has never failed for me (now I'm curious to know if I've 
been doing something wrong for a long time). The program is very short, 
so you may want to take a look.
For your convenience, I reproduce the code here with the modifications 
that reproduce his problem:


#include <osg/Geode>
#include <osg/ShapeDrawable>
#include <osgViewer/Viewer>
#include <osgGA/TrackballManipulator>
#include <osgDB/WriteFile>

int main(int argc, char *argv[])
{
    osg::ArgumentParser args(argc, argv);
    osgViewer::Viewer viewer(args);
    osg::Camera *camera = viewer.getCamera();

    osg::ref_ptr<osg::GraphicsContext::Traits> traits = new
        osg::GraphicsContext::Traits;
    traits->width = 512;
    traits->height = 512;
    traits->pbuffer = true;
    traits->readDISPLAY();
    osg::GraphicsContext *gc =
        osg::GraphicsContext::createGraphicsContext(traits.get());
    camera->setGraphicsContext(gc);
    camera->setDrawBuffer(GL_FRONT);
    camera->setProjectionMatrixAsPerspective(22, 1, 0.1, 1000);
    camera->setViewport(new osg::Viewport(0, 0, 512, 512));

    osg::Geode *scene = new osg::Geode();
    osg::Shape *sphere = new osg::Sphere(osg::Vec3(), 1);
    scene->addDrawable(new osg::ShapeDrawable(sphere));
    viewer.setSceneData(scene);

    viewer.setCameraManipulator(new osgGA::TrackballManipulator());

    osg::Image *image = new osg::Image();
    camera->attach(osg::Camera::COLOR_BUFFER0, image);

    viewer.realize();
    viewer.run();
}

Regards,
Juan
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Hidden Viewer

2010-12-01 Thread Juan Hernando



I have tested under Kubuntu 9.04 + ATI 4670 and the driver doesn't
support pbuffer but the fallback opens up a full screen window and it
works fine.

I have also tested under Kubuntu 10.10 + ATI 4670 and the driver does
support pbuffer and the app runs correctly.

In Ubuntu 9.10 with a Nvidia GTX 280 with driver version 190.53 it works 
fine.


Oliver, maybe you should try a low-level pbuffer example to rule it out 
as the problem. First, you can use an OpenGL extension checker and 
search for WGL_ARB_pbuffer; I think GPU-Z does the job.


Juan
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Hidden Viewer

2010-12-01 Thread Juan Hernando

WGL_ARB_pbuffer
WGL_ARB_pixel_format

are supported by my nVidia Quadro FX 580 (see screenshot).
Because it sounded like a possible fix, I defined the environment variable
OSG_WIN32_NV_MULTIMON_MULTITHREAD_WORKAROUND to ON, but that changed nothing.

As far as my little knowledge goes, I think that the bug is in the OSG code.


The first step to rule out a driver bug should be testing a pbuffer 
example that only uses OpenGL and WGL.


Cheers,
Juan
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Hidden Viewer

2010-12-01 Thread Juan Hernando

Hi,

I've debugged what happens, and roughly the sequence is this:

1. main(): createGraphicsContext(traits.get())
   --- PixelBufferWin32 is created successfully

2. main(): viewer.realize()
   --- PixelBufferWin32::makeCurrentImplementation() is called, and 
succeeds.


3. GraphicsThread::run()
   --- PixelBufferWin32::makeCurrentImplementation() is called, and 
fails with The requested resource is in use.
I don't know why the second makeCurrentImplementation would fail, but 
going from Juan's hunch that it was failing because some other thread 
was holding on to it (the main thread in this case), 

Good tracking here.
According to the glXMakeCurrent specification, making a context current 
on one thread while another one is holding it is also a mistake, and a 
BadAccess error should be generated.


I added the 
following code after viewer.realize(); :


viewer.stopThreading();
gc->releaseContext();
viewer.startThreading();

This made it work.
This basically means that the context needs to be released at the end of 
createGraphicsContext or at some other point.


This is obviously not a solution. It worked as-is before on Linux, so 
perhaps on Windows the makeCurrent() call makes the context exclusive 
to the thread on which it was called, whereas on Linux a subsequent 
makeCurrent() will still succeed and just remove the context from the 
previous thread that had it current silently?
If your trace is correct and it's the same for Linux, then I think it 
has been a matter of luck that a Linux driver didn't complain before.
However, I've taken a look at the code, searching for how contexts are 
handled, and I've found a fundamental difference between the X11 and 
Win32 implementations of PixelBufferXYZ::realizeImplementation. In 
Win32, makeCurrentImplementation is called, but not in X11. Maybe adding 
a releaseContextImplementation(); at the end of that function solves the 
problem.


Cheers,
Juan
___
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org


Re: [osg-users] Hidden Viewer

2010-11-30 Thread Juan Hernando

Hi Oliver,

I'm trying to render scenes into a buffer (e.g. an osg::Image). There
are multiple demos and examples about this (e.g.
example_osgscreencapture) but they all require some sort of visible
viewer to work. I'd like a console application that does not display
any windows.

Is there a class/working example that allows this?
Probably there's a simpler way, but you can create an off-screen viewer 
with this code:


int width = ...;
int height = ...;
osgViewer::Viewer viewer;
osg::Camera *camera = viewer.getCamera();
osg::ref_ptr<osg::GraphicsContext::Traits> traits =
    new osg::GraphicsContext::Traits;
traits->x = 0;
traits->y = 0;
traits->width = width;
traits->height = height;
traits->doubleBuffer = false;
traits->sharedContext = 0;
traits->pbuffer = true;
traits->readDISPLAY();
osg::GraphicsContext *gc =
    osg::GraphicsContext::createGraphicsContext(traits.get());
camera->setGraphicsContext(gc);
camera->setDrawBuffer(GL_FRONT);
camera->setReadBuffer(GL_FRONT);
camera->setViewport(new osg::Viewport(0, 0, width, height));
double fovy, aspectRatio, near, far;
camera->getProjectionMatrixAsPerspective(fovy, aspectRatio, near, far);
double newAspectRatio = double(traits->width) / double(traits->height);
double aspectRatioChange = newAspectRatio / aspectRatio;
if (aspectRatioChange != 1.0)
    camera->getProjectionMatrix() *=
        osg::Matrix::scale(1.0/aspectRatioChange, 1.0, 1.0);

// Viewer stuff like handlers, setting scene data, etc. Don't call any
// of the setUpXYZ functions.
// And don't forget to attach the image to the camera.

viewer.realize();
viewer.run();

I don't know how it works on Windows, but on *NIX note that you still 
need access to the display server.


Cheers,
Juan


Re: [osg-users] Hidden Viewer

2010-11-30 Thread Juan Hernando

Dear Oliver,
Seeing that error my impression is that there is something wrong with
your OpenGL installation. However I seldom use Windows, so I may be wrong.
Can you check the program below with and without commenting the line
that enables the pbuffer and give the whole output?
The program should write a file called screenshot.png in the working
directory. That output image should be a white sphere over the default
background.

#include <osg/Geode>
#include <osg/ShapeDrawable>
#include <osgViewer/Viewer>
#include <osgGA/TrackballManipulator>
#include <osgDB/WriteFile>

int main(int argc, char *argv[])
{
    osg::ArgumentParser args(&argc, argv);
    osgViewer::Viewer viewer(args);
    osg::Camera *camera = viewer.getCamera();

    osg::ref_ptr<osg::GraphicsContext::Traits> traits = new
        osg::GraphicsContext::Traits;
    traits->width = 512;
    traits->height = 512;
    traits->pbuffer = true;
    traits->readDISPLAY();
    osg::GraphicsContext *gc =
        osg::GraphicsContext::createGraphicsContext(traits.get());
    camera->setGraphicsContext(gc);
    camera->setDrawBuffer(GL_FRONT);
    camera->setProjectionMatrixAsPerspective(22, 1, 0.1, 1000);
    camera->setViewport(new osg::Viewport(0, 0, 512, 512));

    osg::Geode *scene = new osg::Geode();
    osg::Shape *sphere = new osg::Sphere(osg::Vec3(), 1);
    scene->addDrawable(new osg::ShapeDrawable(sphere));
    viewer.setSceneData(scene);

    viewer.setCameraManipulator(new osgGA::TrackballManipulator());

    osg::Image *image = new osg::Image();
    camera->attach(osg::Camera::COLOR_BUFFER0, image);

    viewer.setThreadingModel(osgViewer::Viewer::SingleThreaded);
    viewer.realize();
    viewer.frame();

    osgDB::writeImageFile(*image, "screenshot.png");
}

Regards,
Juan


Dear Juan and Robert,

Thanks for the quick reply! I tried the code supplied by Juan but I'm
still getting the "Error: OpenGL version test failed, requires valid
graphics context." message and subsequent errors (many "invalid operation").


As a second test I copied the complete osgViewer::setUpViewInWindow
and just added the line

Code:

traits->pbuffer = true;

to the Traits creation part. Without the line the code works but
displays a window; with the line I get the OpenGL errors. For the
second case I get these errors:

Code:

PixelBufferWin32::makeCurrentImplementation, wglMakeCurrent error:
Die angeforderte Ressource wird bereits verwendet.

Error: In Texture::Extensions::setupGLExtensions(..) OpenGL version
test failed, requires valid graphics context. Scaling image from
(256,256) to (0,0)

("Die angeforderte Ressource wird bereits verwendet." is German for
"The requested resource is in use.")


Is this a Win32 related problem or am I missing something?

Cheers, Oliver



Re: [osg-users] Hidden Viewer

2010-11-30 Thread Juan Hernando

Oliver Neumann wrote:

Hi Juan,

Thanks for the demo code. It runs without a problem. I found the
critical line:

viewer.setThreadingModel(osgViewer::Viewer::SingleThreaded);

If I comment it out, the demo code crashes with the same error text
as above. Very weird... 
Indeed, I added that line because in other threading models the 
viewer.frame() function returns immediately. In that case 
osgDB::writeImageFile will most probably be accessing an invalid image 
object (not allocated or not rendered) and will fail. However, the error 
I get if I remove that line is quite different.


 I tried to add this line to my code, but I
 still have some minor bugs to fix. Apart from that, is it possible
 to run a hidden viewer in a multithreaded fashion?

As Robert already answered, it should be possible.
The best explanation that I have for your problem is that some thread 
fails to do wglMakeCurrent because another thread is holding the context 
(and after that call fails, anything that follows is just garbage). 
According to 
http://www.opengl.org/sdk/docs/man/xhtml/glXMakeCurrent.xml, that's a 
programming error in GLX, so I guess the same applies in WGL. All 
OpenGL calls from that thread are going to fail after that.


If the sample code that I sent you also complains with the error message:
PixelBufferWin32::makeCurrentImplementation, wglMakeCurrent error: Die 
angeforderte Ressource wird bereits verwendet
then there is a threading error somewhere. However, I can't spot it in the 
code I sent you, and I'm afraid that's beyond my knowledge of OSG's guts. 
Robert may be more insightful here.


You can try commenting the line that changes the threading model and 
replacing frame() by run() to see if it also fails.


Hope that helps,
Juan






Re: [osg-users] Transparency Invisibility

2010-11-23 Thread Juan Hernando

Hi Mathieu


I am currently playing with the opacity/transparency capabilities of geodes 
in a model.
To that end, I use:

material = (osg::Material *)
    Geode->getStateSet()->getAttribute(osg::StateAttribute::MATERIAL);
Geode->getStateSet()->setMode(GL_BLEND, osg::StateAttribute::ON);
material->setTransparency(osg::Material::FRONT, 1. - opacity);
Geode->getStateSet()->setAttributeAndModes(material,
    osg::StateAttribute::OVERRIDE);

It works, the geode becomes transparent.

But is there a way to see objects behind this geode ?
Most probably, your problem is that the transparent object is rendered 
first. Then, everything that is behind it doesn't pass the z-test and is 
culled away. In order to see through transparent geometry you have to 
render it after all opaque objects have been rendered. Adding:

Geode->getStateSet()->setRenderBinDetails(1, "transparent");

you will force that geode to be rendered in a different render bin, which 
will be processed after the default render bin.
If you have more than one transparent object you also have to make sure 
that they are rendered back to front. I think that's done with:

Geode->getStateSet()->setRenderingHint(osg::StateSet::TRANSPARENT_BIN);

but I'm not fully sure.

I'd also add:
osg::BlendFunc *func = new osg::BlendFunc();
func->setFunction(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Geode->getStateSet()->setAttributeAndModes(func);
I don't know if that's the default blending function, but I guess that's 
the effect you are looking for. Setting it explicitly can't be harmful.



I mean: if I use Geode->setNodeMask(0x00); the geode is invisible and I 
can see what is behind, but not with the transparency effect.
Setting such a node mask makes all scenegraph traversals ignore that 
node; that's why it becomes invisible.



Please, do you have an idea how to manage the code if I want to see what is 
behind a transparent object?
Keep in mind that the code above only works correctly for 
non-intersecting convex objects without back faces and without 
visibility cycles (so in practice it seldom works). In any other case you 
may see artifacts depending on the view point. If you need reliable 
transparency you will have to implement a much more sophisticated 
technique, such as depth peeling, bucket depth peeling, or this one 
https://graphics.stanford.edu/wikis/cs448s-10/FrontPage?action=AttachFile&do=get&target=CS448s-10-11-oit.pdf 
if your graphics card supports Shader Model 5.


Regards,
Juan



Re: [osg-users] High Pitch noise

2010-11-22 Thread Juan Hernando

Hi Louis,

I have a similar problem to what you describe with a GTX 280. The 
difference is that in my case the noise clearly comes from the integrated 
sound device and I hear it on the headset. Moreover, it can be heard even 
with glxgears, with a different pitch depending on the frame rate. I don't 
pay much attention to it, but I guess it's proof of the quality of the 
integrated sound device.


If you think the noise comes from the graphics card, make sure it isn't 
actually coming from the PSU. Running OpenGL at full speed increases the 
power draw of the graphics card, and a high-pitched noise from the PSU is 
a symptom that some capacitor is about to break down.


Regards,
Juan


I just installed osg and I tried the osganimate demo, on Windows and
Linux. I'm observing that in fullscreen mode, when I zoom in or out,
a high-pitched noise appears, probably from my graphics card (ATI
Radeon HD 5850), with a frequency that depends on the distance from the
object shown.

If I enable VSync the noise disappears, and so does the high frame rate.

I'm asking that here, because I'm getting it only with osg. In the
past I worked with opengl, ogre, and I never got this issue.

What do you suggest me to do?

Thank you very much for any kind of help.

Cheers.




Re: [osg-users] Instanced geometry using a (non power of two)texture1D like a vertex buffer.

2010-09-23 Thread Juan Hernando

Hi Jordi,
I think the problem is in this line of code in your vertex shader:

vec4 pos = gl_Vertex +
    vec4(texture1D(texturePos, float(gl_InstanceID) / nInstances).xyz, 0.);


I think that texels are placed in the [0..1] range using their centers 
and not the left edge as your code assumes. That means that if you have 
two texels, they are not placed at 0 and 0.5, or 0 and 1, but at 0.25 and 
0.75. It's difficult to say what happens at 0.5. In short, the nearest 
texel may not be the one you are expecting (in your images, my guess is 
that the 9th square overdraws the 8th). If your hardware supports it, 
use texelFetch1D, which accepts integer coordinates. Maybe adding 0.5 to 
gl_InstanceID can also do the job.


Finally, remember that 1D textures are limited to 8192 texels in current 
hardware (at least mine). You'll need a Texture Buffer Object if you 
need more.


Regards,
Juan
Jordi Torres wrote:

Hi all,

I am trying to show instanced geometry using a texture1D like a vertex
buffer. In fact I have it working more or less... but if my texture 1D is
non-power-of-two then my shader does not render some of the instances.
Attached are the code and two images: one with a non-power-of-two texture 
(9 elements) that fails to draw the last element, and another with 256 
elements that is drawn correctly.

I am using TransferFunction1D to pass my geometry array to a texture1D.
Does anybody know what I'm doing wrong? Of course I have set
texture->setResizeNonPowerOfTwoHint(false);

Thank you in advance!

Jordi.





Re: [osg-users] GL_R32F (and others) float textures are being normalized

2010-09-21 Thread Juan Hernando

Hi Werner and Robert

Thanks for the answers.


Clamping to the 0.0 to 1.0 range is standard for OpenGL texturing.  I
believe there is now a new GL extension for a float format that isn't
clamped to the 0.0 to 1.0 range, so have a look on OpenGL.org and
other places online for further info.

In ARB_texture_float
(http://www.opengl.org/registry/specs/ARB/texture_float.txt) the 
overview says:

"Floating-point components are clamped to the limits of the range
representable by their format."
And issue 7:
"Are floating-point values clamped for the fixed-function GL?
[...] For the programmable pipelines, no clamping occurs."
As far as changes go, I understand from the revision log that 
user-controlled clamping was added in 2004, but only for fragment 
operations (which I don't care about, because I'm reading the texture, not 
writing to it). The only transformation should be for fixed-point 
formats (like the typical GL_RGBA, not integer texture formats), which are 
normalized to [0..1].


As I understand, the current osg::Texture class is already aware of the 
differences between fixed point, integer and float formats. However, I 
have found neither code inside Texture.cpp nor specific wording in the 
spec for user control of color component clamping/normalization during 
texturing. It seems that there isn't such a thing and the GL 
implementation just chooses the correct behaviour depending on the 
texture format.


GL_R32F comes from ARB_texture_rg for GL 2.1 and it's in GL 3.0 core.
I've used these and GL_RG32F formats before as target formats for FBOs, 
and by default the results are written and read by the shaders without 
clamping as expected.
My best guess is that the driver is clamping GL_R32F when glTexImage1D 
is called, hence I'd dare to say it's a driver bug. That osg::Texture 
doesn't compute the correct InternalFormatType for these relatively new 
formats is inconsistent but should be harmless.
By the way, I'm using the NVIDIA Linux driver version 256.44 with a 2.0 
context, in case someone is curious enough to try other setups.


Nevertheless, I've realized my problem is easily solved using 
GL_LUMINANCE32F_ARB instead of the more bizarre GL_R32F.


Regards,
Juan





Re: [osg-users] GL_R32F (and others) float textures are being normalized

2010-09-21 Thread Juan Hernando

Hi J.P.,
we're using unclamped textures just fine. Also have a look at the 
difference between gl_FragColor and gl_FragData.
I think you misunderstood the actual problem. The texture is not written 
by a fragment shader. I'm filling the texture from the client side and 
it is inside the vertex shader where clamped values are returned by the 
texture sampler. And this happens for GL_R32F but not for GL_RGBA32F or 
GL_LUMINANCE32F.


Anyway, as I stated in my previous mail, I'll just use GL_LUMINANCE32F 
instead of GL_R32F.


Thanks and cheers,
Juan


[osg-users] GL_R32F (and others) float textures are being normalized

2010-09-20 Thread Juan Hernando

Dear all,
I'm writing some GLSL code that needs to access a 1D floating point 
texture as input in a vertex shader. My problem is that I'm getting 
clamped/normalized (not sure which one) values inside GLSL instead of 
the full range values.


For debug purposes I've set up a dummy texture like this:
  osg::Image *image = new osg::Image;
  float *tmp = new float;
  image->setImage(1, 1, 1, GL_R32F, GL_RED, GL_FLOAT,
                  (unsigned char*)tmp, osg::Image::USE_NEW_DELETE);
  tmp[0] = 2;
  osg::Texture1D *texture = new osg::Texture1D();
  texture->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST);
  texture->setFilter(osg::Texture::MAG_FILTER, osg::Texture::NEAREST);
  texture->setImage(image);

In this case, the following GLSL expression:
  texture1D(the_texture, 0.0).r
returns 1.

But if I change the image setup to:
  float *tmp = new float[4];
  image->setImage(1, 1, 1, GL_RGBA32F, GL_RGBA, GL_FLOAT,
                  (unsigned char*)tmp, osg::Image::USE_NEW_DELETE);
it works fine.

Looking at the source code from Texture.cpp I've found that 
Texture::computeInternalFormatType() does not deal with GL_R32F, 
GL_RG32F, GL_R32UI, ... they all fall into the default clause of the 
switch statement which assigns _internalFormatType to NORMALIZED. At the 
same time I've found no member function to change that attribute manually.

Is that an omission or am I doing something wrong in the initialization?
If that's an omission, is there an easy workaround that doesn't require 
recompiling the library?


Thanks and best regards,
Juan


[osg-users] Typo in include/osg/ImageUtils

2010-01-18 Thread Juan Hernando

Dear all,
Doing some tests with osgvolume we've found a typo in some code dealing 
with RGBA images. It's located at include/osg/ImageUtils line 88:

 case(GL_RGBA): { for(unsigned int i=0;i<num;++i) { float r = 
float(*data)*scale; float g = float(*(data+1))*scale; float b = 
float(*(data+2))*scale; float a = float(*(data+3))*scale; 
operation.rgba(r,g,b,a); *data++ = T(r*inv_scale); *data++ = 
T(g*inv_scale); *data++ = T(g*inv_scale); *data++ = T(a*inv_scale); } 
}  break;

Notice that g*inv_scale appears instead of b*inv_scale near the end of 
the line.

I've checked through the web interface that the trunk still has the typo.

Regards,
Juan


Re: [osg-users] Typo in include/osg/ImageUtils

2010-01-18 Thread Juan Hernando

Robert Osfield wrote:

Hi Juan,

Could you post your fix as a whole modified source file, as your email
is rather difficult to follow.

OK, find the file attached.

Cheers,
Juan
/* -*-c++-*- OpenSceneGraph - Copyright (C) 1998-2006 Robert Osfield 
 *
 * This library is open source and may be redistributed and/or modified under  
 * the terms of the OpenSceneGraph Public License (OSGPL) version 0.0 or 
 * (at your option) any later version.  The full license is in LICENSE file
 * included with this distribution, and on the openscenegraph.org website.
 * 
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the 
 * OpenSceneGraph Public License for more details.
*/

#ifndef OSG_IMAGEUTILS
#define OSG_IMAGEUTILS 1

#include <osg/Export>

#include <osg/Image>

namespace osg {

template <typename T, class O>
void _readRow(unsigned int num, GLenum pixelFormat, const T* data, float scale, O& operation)
{
    switch(pixelFormat)
    {
        case(GL_LUMINANCE):       { for(unsigned int i=0;i<num;++i) { float l = float(*data++)*scale; operation.luminance(l); } }  break;
        case(GL_ALPHA):           { for(unsigned int i=0;i<num;++i) { float a = float(*data++)*scale; operation.alpha(a); } }  break;
        case(GL_LUMINANCE_ALPHA): { for(unsigned int i=0;i<num;++i) { float l = float(*data++)*scale; float a = float(*data++)*scale; operation.luminance_alpha(l,a); } }  break;
        case(GL_RGB):             { for(unsigned int i=0;i<num;++i) { float r = float(*data++)*scale; float g = float(*data++)*scale; float b = float(*data++)*scale; operation.rgb(r,g,b); } }  break;
        case(GL_RGBA):            { for(unsigned int i=0;i<num;++i) { float r = float(*data++)*scale; float g = float(*data++)*scale; float b = float(*data++)*scale; float a = float(*data++)*scale; operation.rgba(r,g,b,a); } }  break;
        case(GL_BGR):             { for(unsigned int i=0;i<num;++i) { float b = float(*data++)*scale; float g = float(*data++)*scale; float r = float(*data++)*scale; operation.rgb(r,g,b); } }  break;
        case(GL_BGRA):            { for(unsigned int i=0;i<num;++i) { float b = float(*data++)*scale; float g = float(*data++)*scale; float r = float(*data++)*scale; float a = float(*data++)*scale; operation.rgba(r,g,b,a); } }  break;
    }
}

template <class O>
void readRow(unsigned int num, GLenum pixelFormat, GLenum dataType, const unsigned char* data, O& operation)
{
    switch(dataType)
    {
        case(GL_BYTE):           _readRow(num, pixelFormat, (const char*)data,           1.0f/128.0f,        operation); break;
        case(GL_UNSIGNED_BYTE):  _readRow(num, pixelFormat, (const unsigned char*)data,  1.0f/255.0f,        operation); break;
        case(GL_SHORT):          _readRow(num, pixelFormat, (const short*)data,          1.0f/32768.0f,      operation); break;
        case(GL_UNSIGNED_SHORT): _readRow(num, pixelFormat, (const unsigned short*)data, 1.0f/65535.0f,      operation); break;
        case(GL_INT):            _readRow(num, pixelFormat, (const int*)data,            1.0f/2147483648.0f, operation); break;
        case(GL_UNSIGNED_INT):   _readRow(num, pixelFormat, (const unsigned int*)data,   1.0f/4294967295.0f, operation); break;
        case(GL_FLOAT):          _readRow(num, pixelFormat, (const float*)data,          1.0f,               operation); break;
    }
}

template <class O>
void readImage(const osg::Image* image, O& operation)
{
    if (!image) return;

    for(int r=0;r<image->r();++r)
    {
        for(int t=0;t<image->t();++t)
        {
            readRow(image->s(), image->getPixelFormat(), image->getDataType(), image->data(0,t,r), operation);
        }
    }
}

// example ModifyOperator
// struct ModifyOperator
// {
//     inline void luminance(float& l) const {}
//     inline void alpha(float& a) const {}
//     inline void luminance_alpha(float& l, float& a) const {}
//     inline void rgb(float& r, float& g, float& b) const {}
//     inline void rgba(float& r, float& g, float& b, float& a) const {}
// };


template <typename T, class M>
void _modifyRow(unsigned int num, GLenum pixelFormat, T* data, float scale, const M& operation)
{
    float inv_scale = 1.0f/scale;
    switch(pixelFormat)
    {
        case(GL_LUMINANCE):       { for(unsigned int i=0;i<num;++i) { float l = float(*data)*scale; operation.luminance(l); *data++ = T(l*inv_scale); } }  break;
        case(GL_ALPHA):           { for(unsigned int i=0;i<num;++i) { float a = float(*data)*scale; operation.alpha(a); *data++ = T(a*inv_scale); } }  break;
        case(GL_LUMINANCE_ALPHA): { for(unsigned int i=0;i<num;++i) { float l = float(*data)*scale; float a = float(*(data+1))*scale; operation.luminance_alpha(l,a); *data++ = T(l*inv_scale); *data++ = T(a*inv_scale); } }  break;
        case(GL_RGB):             { for(unsigned int

Re: [osg-users] Typo in include/osg/ImageUtils

2010-01-18 Thread Juan Hernando

Hi
The same file seems to have another typo at line 90. I've attached a new 
version; please use that one for the comparison.


Cheers,
Juan
/* -*-c++-*- OpenSceneGraph - Copyright (C) 1998-2006 Robert Osfield 
 *
 * This library is open source and may be redistributed and/or modified under  
 * the terms of the OpenSceneGraph Public License (OSGPL) version 0.0 or 
 * (at your option) any later version.  The full license is in LICENSE file
 * included with this distribution, and on the openscenegraph.org website.
 * 
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the 
 * OpenSceneGraph Public License for more details.
*/

#ifndef OSG_IMAGEUTILS
#define OSG_IMAGEUTILS 1

#include <osg/Export>

#include <osg/Image>

namespace osg {

template <typename T, class O>
void _readRow(unsigned int num, GLenum pixelFormat, const T* data, float scale, O& operation)
{
    switch(pixelFormat)
    {
        case(GL_LUMINANCE):       { for(unsigned int i=0;i<num;++i) { float l = float(*data++)*scale; operation.luminance(l); } }  break;
        case(GL_ALPHA):           { for(unsigned int i=0;i<num;++i) { float a = float(*data++)*scale; operation.alpha(a); } }  break;
        case(GL_LUMINANCE_ALPHA): { for(unsigned int i=0;i<num;++i) { float l = float(*data++)*scale; float a = float(*data++)*scale; operation.luminance_alpha(l,a); } }  break;
        case(GL_RGB):             { for(unsigned int i=0;i<num;++i) { float r = float(*data++)*scale; float g = float(*data++)*scale; float b = float(*data++)*scale; operation.rgb(r,g,b); } }  break;
        case(GL_RGBA):            { for(unsigned int i=0;i<num;++i) { float r = float(*data++)*scale; float g = float(*data++)*scale; float b = float(*data++)*scale; float a = float(*data++)*scale; operation.rgba(r,g,b,a); } }  break;
        case(GL_BGR):             { for(unsigned int i=0;i<num;++i) { float b = float(*data++)*scale; float g = float(*data++)*scale; float r = float(*data++)*scale; operation.rgb(r,g,b); } }  break;
        case(GL_BGRA):            { for(unsigned int i=0;i<num;++i) { float b = float(*data++)*scale; float g = float(*data++)*scale; float r = float(*data++)*scale; float a = float(*data++)*scale; operation.rgba(r,g,b,a); } }  break;
    }
}

template <class O>
void readRow(unsigned int num, GLenum pixelFormat, GLenum dataType, const unsigned char* data, O& operation)
{
    switch(dataType)
    {
        case(GL_BYTE):           _readRow(num, pixelFormat, (const char*)data,           1.0f/128.0f,        operation); break;
        case(GL_UNSIGNED_BYTE):  _readRow(num, pixelFormat, (const unsigned char*)data,  1.0f/255.0f,        operation); break;
        case(GL_SHORT):          _readRow(num, pixelFormat, (const short*)data,          1.0f/32768.0f,      operation); break;
        case(GL_UNSIGNED_SHORT): _readRow(num, pixelFormat, (const unsigned short*)data, 1.0f/65535.0f,      operation); break;
        case(GL_INT):            _readRow(num, pixelFormat, (const int*)data,            1.0f/2147483648.0f, operation); break;
        case(GL_UNSIGNED_INT):   _readRow(num, pixelFormat, (const unsigned int*)data,   1.0f/4294967295.0f, operation); break;
        case(GL_FLOAT):          _readRow(num, pixelFormat, (const float*)data,          1.0f,               operation); break;
    }
}

template <class O>
void readImage(const osg::Image* image, O& operation)
{
    if (!image) return;

    for(int r=0;r<image->r();++r)
    {
        for(int t=0;t<image->t();++t)
        {
            readRow(image->s(), image->getPixelFormat(), image->getDataType(), image->data(0,t,r), operation);
        }
    }
}

// example ModifyOperator
// struct ModifyOperator
// {
//     inline void luminance(float& l) const {}
//     inline void alpha(float& a) const {}
//     inline void luminance_alpha(float& l, float& a) const {}
//     inline void rgb(float& r, float& g, float& b) const {}
//     inline void rgba(float& r, float& g, float& b, float& a) const {}
// };


template <typename T, class M>
void _modifyRow(unsigned int num, GLenum pixelFormat, T* data, float scale, const M& operation)
{
    float inv_scale = 1.0f/scale;
    switch(pixelFormat)
    {
        case(GL_LUMINANCE):       { for(unsigned int i=0;i<num;++i) { float l = float(*data)*scale; operation.luminance(l); *data++ = T(l*inv_scale); } }  break;
        case(GL_ALPHA):           { for(unsigned int i=0;i<num;++i) { float a = float(*data)*scale; operation.alpha(a); *data++ = T(a*inv_scale); } }  break;
        case(GL_LUMINANCE_ALPHA): { for(unsigned int i=0;i<num;++i) { float l = float(*data)*scale; float a = float(*(data+1))*scale; operation.luminance_alpha(l,a); *data++ = T(l*inv_scale); *data++ = T(a*inv_scale); } }  break;
        case(GL_RGB):             { for(unsigned int i=0;i<num;++i) { float r = float(*data)*scale;