On 10/23/2012 11:59 AM, Christoph Heindl wrote:
Hi Jason,

On Tue, Oct 23, 2012 at 5:27 PM, Jason Daly <[email protected]> wrote:


    There's no difference between these two.  ARB_multitexture is a
    15-year-old extension that simply provides the specification for
    how multitexturing is done.  Multitexturing has been part of
    standard OpenGL since version 1.3.


OK, I see. I stumbled upon these terms when looking at

http://updraft.github.com/osgearth-doc/html/classosgEarth_1_1TextureCompositor.html


     - using multi-pass rendering. Probably slower but not limited by
    hardware.

    I doubt you'll need to resort to this, but with the vague
    description of what you're doing, I can't be 100% sure.


Actually, what I have is a mesh generated from depth maps. In a post-processing step I want to apply photos (taken by arbitrary cameras, but with known intrinsics) as textures. What I know is the position from which each photo was taken (relative to the mesh) and the camera intrinsics.

OK, that makes sense. It doesn't change what I said earlier; you can still do this with projective texturing.


How can TexGen and a shader help here? Would they allow me to calculate the UV coordinates for a given photo (camera position, etc.) and the mesh?


The more I think about it, the more I think you'll want to use a shader for this. The basis for your technique will be the EYE_LINEAR TexGen mode that old-fashioned projective texturing used, so you'll probably want to read up on that. There's some sample code written in pure OpenGL here:

http://www.sgi.com/products/software/opengl/examples/glut/advanced/source/projtex.c

The equation used for EYE_LINEAR TexGen is given in the OpenGL spec. You can also find it in the man page for glTexGen, available here:

http://www.opengl.org/sdk/docs/man2/xhtml/glTexGen.xml


Once you're familiar with that technique, you'll probably be able to come up with a specific variation that works better for your situation.
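
If it helps, here's a minimal sketch of what the shader equivalent of EYE_LINEAR looks like for a single photo. It's just an illustration, not anything out of OSG or osgEarth; the photoView and photoProjection uniform names are made up, and I'm assuming your mesh is modeled in world coordinates:

// Sketch of shader-based projective texturing (GLSL 1.20 / compatibility profile).
// EYE_LINEAR TexGen evaluates, for each generated coordinate,
//     g = p1'*xe + p2'*ye + p3'*ze + p4'*we
// i.e. a dot product of the vertex position with a user-supplied plane (with the
// plane transformed by the inverse modelview in the fixed-function version).
// Four such planes stacked together are just a 4x4 matrix, which is what we
// build here from the photo's pose and intrinsics.

// Hypothetical uniforms: the photo camera's view matrix (from its position and
// orientation) and its projection matrix (from its intrinsics).
uniform mat4 photoView;        // world space -> photo camera space
uniform mat4 photoProjection;  // photo camera space -> clip space

varying vec4 projTexCoord;     // projective texture coordinate for the photo

void main()
{
    // Assuming the mesh vertices are already in world coordinates (identity
    // model transform); otherwise bring gl_Vertex into world space first.
    vec4 worldPos = gl_Vertex;

    // Scale/bias matrix mapping clip-space [-1,1] to texture-space [0,1].
    // (Column-major constructor: each line below is one column.)
    mat4 bias = mat4(0.5, 0.0, 0.0, 0.0,
                     0.0, 0.5, 0.0, 0.0,
                     0.0, 0.0, 0.5, 0.0,
                     0.5, 0.5, 0.5, 1.0);

    // World space -> photo camera -> clip space -> texture space.
    projTexCoord = bias * photoProjection * photoView * worldPos;

    gl_Position = ftransform();
}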

Another benefit of using shaders is that you'll be able to do any blending, exposure compensation, etc. that you might need, quite easily, as part of the texturing process.
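
For instance, the fragment side could look something like this (again just a sketch; photoTex, baseColor, exposureScale, and blendWeight are names I've made up):

// Fragment-shader sketch (GLSL 1.20): projective lookup plus simple blending
// and exposure compensation.  All uniform names here are hypothetical.
uniform sampler2D photoTex;       // the projected photo
uniform vec4      baseColor;      // whatever the surface looks like without it
uniform float     exposureScale;  // per-photo exposure compensation factor
uniform float     blendWeight;    // how strongly to blend the photo in

varying vec4 projTexCoord;

void main()
{
    // texture2DProj divides by projTexCoord.q before the lookup.
    vec4 photo = texture2DProj(photoTex, projTexCoord) * exposureScale;

    // Only apply the photo where it actually projects onto the mesh:
    // in front of the photo camera and inside the [0,1] texture range.
    vec2 st = projTexCoord.st / projTexCoord.q;
    bool inside = projTexCoord.q > 0.0 &&
                  all(greaterThanEqual(st, vec2(0.0))) &&
                  all(lessThanEqual(st, vec2(1.0)));

    gl_FragColor = mix(baseColor, photo, inside ? blendWeight : 0.0);
}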



I wanted to avoid splitting the mesh, at least for the internal representation (which I had hoped could also be used for visualization). Pros and cons have been discussed in this thread (in case you are interested):

https://groups.google.com/d/topic/reconstructme/sDb_A-n6_A0/discussion

You might not need to segment the mesh. If you don't, it means you'll have to have all of your photo textures active at the same time. Most modern graphics cards can handle at least 8, decent mid-range gaming cards can handle as many as 64, and high-end enthusiast cards can even hit 128. If your photo count is less than the limit for your hardware, you'll probably be OK. You'll just need to encode which photo or photos are important for each vertex, so you can look them up in the shader; you'd do this as a vertex attribute.

Your photo texture samplers will be one set of uniforms, and you'll need another set to encode the photo positions and orientations, as these will be needed to calculate the texture coordinates. You won't need to pass texture coordinates as vertex attributes, because you'll be generating them in the vertex shader. As long as you don't have more than a few photos per vertex, you shouldn't have any issues with the limited number of varyings that can be passed between shader stages.
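
Putting those last two paragraphs together, the vertex side might look roughly like this (MAX_PHOTOS, photoTexMatrix, and photoIndex are names I've invented, and I'm assuming at most two photos influence any one vertex). The fragment shader would then pick the matching samplers with an if/else chain or an unrolled loop, since older GLSL won't let you index a sampler array with an arbitrary value, and blend them as sketched above.

// Vertex-shader sketch (GLSL 1.20) for the unsegmented-mesh approach.
// All names here (MAX_PHOTOS, photoTexMatrix, photoIndex, ...) are invented
// for illustration.
#define MAX_PHOTOS 16

// One texture matrix per photo: bias * projection(intrinsics) * view(position/
// orientation), pre-multiplied on the CPU.  These are the "second set of
// uniforms" encoding each photo's pose.
uniform mat4 photoTexMatrix[MAX_PHOTOS];

// Which photos matter for this vertex, encoded as a vertex attribute;
// -1.0 means "no photo".  Assuming at most two photos per vertex.
attribute vec2 photoIndex;

// Generated projective coordinates plus the indices, passed along so the
// fragment shader knows which samplers to use.  (This assumes the vertices
// of a triangle reference the same photos, so the interpolated indices stay
// meaningful.)
varying vec4 projTexCoord0;
varying vec4 projTexCoord1;
varying vec2 photoIndexOut;

void main()
{
    // Assuming the mesh is modeled in world coordinates; otherwise transform
    // gl_Vertex into whatever space the photo matrices expect.
    vec4 worldPos = gl_Vertex;

    // Clamp "no photo" (-1) to 0 just to keep the array access in range;
    // the fragment shader still checks the original index.
    int i0 = int(max(photoIndex.x, 0.0));
    int i1 = int(max(photoIndex.y, 0.0));

    projTexCoord0 = photoTexMatrix[i0] * worldPos;
    projTexCoord1 = photoTexMatrix[i1] * worldPos;

    photoIndexOut = photoIndex;
    gl_Position   = ftransform();
}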

--"J"
