On Tue, Oct 23, 2012 at 8:40 PM, Jason Daly <jd...@ist.ucf.edu> wrote:

> How can TexGen and a shader help here? Would it allow me to calculate the
> UV coordinates from a given photo (camera position etc.) and the mesh?
>
>
>
> The more I think about it, the more I think you'll want to use a shader
> for this.  The basis for your technique will be the EYE_LINEAR TexGen mode
> that old-fashioned projective texturing used, so you'll probably want to
> read up on that.  There's some sample code written in pure OpenGL here:
>
>
> http://www.sgi.com/products/software/opengl/examples/glut/advanced/source/projtex.c
>
> The equation used for EYE_LINEAR TexGen is given in the OpenGL spec.  You
> can also find it in the man page for glTexGen, available here:
>
> http://www.opengl.org/sdk/docs/man2/xhtml/glTexGen.xml
>

Thanks for the hints. From browsing the documentation it seems, though, that
this would also texture non-visible triangles (i.e. back-facing triangles)?
That would lead to messy texturing, since I have photos taken from all around
the mesh.
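
What I picture to get around that is weighting each photo's contribution by
how much the surface actually faces that photo's camera. Roughly this is the
kind of fragment-shader helper I have in mind (untested sketch; photoDir and
eyeNormal are names I made up, photoDir being the photo camera's viewing
direction in eye space):

// Sketch only: weight of one photo for the current fragment.
// "photoDir" and "eyeNormal" are invented names, not anything from OSG.
static const char* facingSrc =
    "uniform vec3 photoDir;   // direction the photo camera looks, eye space\n"
    "varying vec3 eyeNormal;  // surface normal, eye space\n"
    "float photoWeight()\n"
    "{\n"
    "    // > 0 when the surface faces the photo camera, 0 when it faces away\n"
    "    return max(dot(normalize(eyeNormal), -normalize(photoDir)), 0.0);\n"
    "}\n";

The same weight could presumably also drive the blending between overlapping
photos.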


>
>
>
> Once you're familiar with that technique, you'll probably be able to come
> up with a specific approach that works better for your situation.
>
> Another benefit of using shaders is that you'll be able to do any blending,
> exposure compensation, etc. that you might need to do quite easily as part
> of the texturing process.
>

I think I'd need a starter sample for how to use EYE_LINEAR in combination
with a shader.
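
From the glTexGen man page and the projtex.c example, I pieced together the
following as my current understanding of EYE_LINEAR done in a shader. It is
only an untested sketch, and all names (photoTexMat, photo,
addProjectiveTexturing) are mine, not an existing OSG interface:

#include <osg/Program>
#include <osg/Shader>
#include <osg/StateSet>

// Vertex shader: reproduce the EYE_LINEAR texgen math by multiplying the
// eye-space vertex position with a per-photo texture matrix.
static const char* vertSrc =
    "uniform mat4 photoTexMat;  // bias * photoProj * photoView * inverse(viewerView)\n"
    "varying vec4 projCoord;\n"
    "void main()\n"
    "{\n"
    "    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;  // vertex in eye space\n"
    "    projCoord   = photoTexMat * eyePos;            // same math as EYE_LINEAR\n"
    "    gl_Position = ftransform();\n"
    "}\n";

// Fragment shader: texture2DProj does the divide by q, as in projective
// texturing.
static const char* fragSrc =
    "uniform sampler2D photo;\n"
    "varying vec4 projCoord;\n"
    "void main()\n"
    "{\n"
    "    gl_FragColor = texture2DProj(photo, projCoord);\n"
    "}\n";

void addProjectiveTexturing(osg::StateSet* ss)
{
    osg::Program* prog = new osg::Program;
    prog->addShader(new osg::Shader(osg::Shader::VERTEX,   vertSrc));
    prog->addShader(new osg::Shader(osg::Shader::FRAGMENT, fragSrc));
    ss->setAttributeAndModes(prog);
    // photoTexMat and the "photo" sampler would be set as osg::Uniforms
    // on the same StateSet.
}

Is that roughly the right direction?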


> You might not need to segment the mesh.  If you don't segment the mesh, it
> means that you'll have to have all of your photo textures active at the
> same time.  Most modern graphics cards can handle at least 8, decent
> mid-range gaming cards can handle as many as 64, and the high-end
> enthusiast cards can even hit 128.  If your photo count is less than this
> number for your hardware, you'll probably be OK.  You'll just need to
> encode which photo or photos are important for each vertex, so you can
> look them up in the shader; you'd do this with a vertex attribute.
>

Since ReconstructMe itself requires a decent graphics card, we are OK with
that approach. I assume that there won't be more than 64 photos per mesh and
that a vertex would never be textured from more than 3 photos.
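
If I understand the vertex attribute idea correctly, on the OSG side it would
be something like this (again only a sketch; the attribute location 6 and the
name "photoIndices" are arbitrary choices of mine):

#include <osg/Geometry>
#include <osg/Program>
#include <vector>

// Sketch: store up to 3 photo indices per vertex as a generic vertex
// attribute, packed into the x/y/z components of a Vec3.
void attachPhotoIndices(osg::Geometry* geom, osg::Program* prog,
                        const std::vector<osg::Vec3>& indicesPerVertex)
{
    osg::Vec3Array* attr = new osg::Vec3Array;
    for (size_t i = 0; i < indicesPerVertex.size(); ++i)
        attr->push_back(indicesPerVertex[i]);

    geom->setVertexAttribArray(6, attr);
    geom->setVertexAttribBinding(6, osg::Geometry::BIND_PER_VERTEX);
    prog->addBindAttribLocation("photoIndices", 6);  // "attribute vec3 photoIndices;" in the shader
}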


>
> Your photo texture samplers will be one set of uniforms, and you'll need
> another set to encode the photo position and orientation, as these will be
> needed to calculate the texture coordinates.  You won't need to pass
> texture coordinates as vertex attributes, because you'll be generating them
> in the vertex shader.  As long as you don't have more than a few photos per
> vertex, you shouldn't have any issues with the limited number of vertex
> attributes that can be passed between shader stages.
>

That sounds interesting. You don't happen to have an example at hand, do you?
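
In the meantime, this is what I understand by the two sets of uniforms (again
just a sketch; the names photoTex and photoTexMat are mine, and I have not
tried it):

#include <osg/Matrixf>
#include <osg/StateSet>
#include <osg/Texture2D>
#include <osg/Uniform>
#include <osg/ref_ptr>
#include <vector>

// Sketch: one sampler uniform array for the photos and one mat4 uniform
// array for the per-photo texture matrices used by the vertex shader.
void addPhotoUniforms(osg::StateSet* ss,
                      const std::vector< osg::ref_ptr<osg::Texture2D> >& photos,
                      const std::vector<osg::Matrixf>& photoTexMats)
{
    osg::Uniform* samplers = new osg::Uniform(osg::Uniform::SAMPLER_2D,
                                              "photoTex", (int)photos.size());
    osg::Uniform* matrices = new osg::Uniform(osg::Uniform::FLOAT_MAT4,
                                              "photoTexMat", (int)photoTexMats.size());
    for (unsigned int i = 0; i < photos.size(); ++i)
    {
        ss->setTextureAttributeAndModes(i, photos[i].get());
        samplers->setElement(i, (int)i);          // sampler i reads texture unit i
        matrices->setElement(i, photoTexMats[i]);
    }
    ss->addUniform(samplers);
    ss->addUniform(matrices);
}

Does that match what you have in mind?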

A rather related problem is the export of meshes. Using the shader approach,
how would one deliver such a textured mesh? Wouldn't it make more sense to
pre-calculate the UV coordinates for each vertex/photo pair up front
(including visibility detection; a rough sketch follows below the list) and
pass them to the shader (is that possible?), so that on file export I could
decide, based on the chosen format, to either
 - split the mesh if the format does not support multiple texture layers
(.ply/.obj maybe), or
 - not split the mesh if the format supports multiple layers (.fbx).
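
The per-photo UV pre-calculation I am thinking of would look roughly like this
on the CPU (sketch only; "photoViewProj" stands for whatever view * projection
matrix my calibration gives me for a photo, and occlusion testing is not
handled here):

#include <osg/Matrixd>
#include <osg/Vec2d>
#include <osg/Vec3d>
#include <osg/Vec4d>

// Sketch: project one vertex with one photo's view*projection matrix and
// map the result from clip space [-1,1] into texture space [0,1].
bool projectVertex(const osg::Vec3d& vertex, const osg::Matrixd& photoViewProj,
                   osg::Vec2d& uv)
{
    osg::Vec4d clip = osg::Vec4d(vertex, 1.0) * photoViewProj;  // OSG row-vector convention
    if (clip.w() <= 0.0)
        return false;                     // vertex is behind the photo camera

    uv.set(0.5 * (clip.x() / clip.w() + 1.0),
           0.5 * (clip.y() / clip.w() + 1.0));

    // true only if the vertex actually falls inside the photo; occlusion and
    // back-facing tests would still have to be done separately
    return uv.x() >= 0.0 && uv.x() <= 1.0 && uv.y() >= 0.0 && uv.y() <= 1.0;
}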

Thanks for your time,
Christoph