Hi,
 
I am trying to write an intersect post-processing function that reads the
alpha value at the hit location.
So far I have managed to get the texture coordinates for each intersection point,
and to find which texture (in texture unit 0) is used by the object I intersect
with.
 
I have a pointer to the texture:
    osg::Texture2D* texture;
 
The scene I am intersecting is read from a pre-created file, and all osg::Image
data is automatically deleted after apply. So I cannot always call
texture->getImage() and read the pixels from there.
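When the image is still available, the lookup is just index arithmetic on the
pixel buffer. A minimal sketch of what I do in that case (assuming an unpacked
8-bit RGBA image and clamped texture coordinates; the helper name is
illustrative, not OSG API):

    #include <cstddef>
    #include <vector>

    // Map a texture coordinate pair (s, t) in [0, 1] to the alpha byte of an
    // unpacked 8-bit RGBA image stored row by row.
    unsigned char alphaAt(const std::vector<unsigned char>& rgba,
                          int width, int height, float s, float t)
    {
        // Clamp to [0, 1]; other wrap modes (REPEAT etc.) would need more logic.
        if (s < 0.f) s = 0.f; if (s > 1.f) s = 1.f;
        if (t < 0.f) t = 0.f; if (t > 1.f) t = 1.f;

        int x = static_cast<int>(s * (width  - 1) + 0.5f);  // nearest texel column
        int y = static_cast<int>(t * (height - 1) + 0.5f);  // nearest texel row

        std::size_t index = (static_cast<std::size_t>(y) * width + x) * 4; // RGBA stride
        return rgba[index + 3];                                            // alpha byte
    }

That works fine; the problem is only the cases below where the image is gone.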
 
In the cases where getImage() returns a NULL pointer, I try to read the pixel
back through OpenGL instead:
    osg::Texture::TextureObject* to = texture->getTextureObject( 0 ); // context 0
    glBindTexture( GL_TEXTURE_2D, to->_id );
    unsigned char data[4];
    glReadPixels( xLocation, yLocation, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, data );
    glBindTexture( GL_TEXTURE_2D, 0 );
 
This code compiles and runs. The problem is that it always returns the same
values for RGB and A, no matter what xLocation and yLocation are.
 
Can anybody see what I am doing wrong?
 
Regards,
Viggo
 
_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org